Dataset fields:
- aid: string (length 9–15)
- mid: string (length 7–10)
- abstract: string (length 78–2.56k)
- related_work: string (length 92–1.77k)
- ref_abstract: dict
1812.10037
2906637185
Semantic parsing is the task of converting natural language utterances into machine-interpretable meaning representations which can be executed against a real-world environment such as a database. Scaling semantic parsing to arbitrary domains faces two interrelated challenges: obtaining broad-coverage training data effectively and cheaply; and developing a model that generalizes to compositional utterances and complex intentions. We address these challenges with a framework that elicits training data from a domain ontology and bootstraps a neural parser which recursively builds derivations of logical forms. In our framework, meaning representations are described by sequences of natural language templates, where each template corresponds to a decomposed fragment of the underlying meaning representation. Although artificial, templates can be understood and paraphrased by humans to create natural utterances, resulting in parallel triples of utterances, meaning representations, and their decompositions. These allow us to train a neural semantic parser which learns to compose rules in deriving meaning representations. We crowdsource training data on six domains, covering both single-turn utterances which exhibit rich compositionality, and sequential utterances where a complex task is procedurally performed in steps. We then develop neural semantic parsers which perform such compositional tasks. In general, our approach allows neural semantic parsers to be deployed quickly and cheaply from a given domain ontology.
The next breakthrough came with the work of zettlemoyer:learning:2005 , who introduced CCG into semantic parsing. Their probabilistic CCG grammars can handle long-range dependencies and construct non-projective meaning representations. A great deal of subsequent work follows zettlemoyer:learning:2005 but focuses on more fine-grained problems such as grammar induction and lexicon learning @cite_70 @cite_75 @cite_20 @cite_8 @cite_53 @cite_15 @cite_52 , or on using less supervision @cite_80 @cite_63 . As a common paradigm, this class of work first generates candidate derivations of meaning representations governed by the grammar. These candidate derivations are scored by a trainable model, which can take the form of a structured perceptron @cite_44 or a log-linear model @cite_43 . Training updates model parameters such that good derivations obtain higher scores. During inference, a CKY-style chart parsing algorithm is used to predict the most likely derivation for an utterance. Another class of work follows a similar paradigm but uses lambda DCS as the semantic formalism @cite_42 @cite_30 @cite_55 . Other interesting work includes joint semantic parsing and grounding @cite_35 , parsing context-dependent queries @cite_13 @cite_33 , and converting dependency trees to meaning representations @cite_49 @cite_25 .
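To make the score-and-rank paradigm concrete, here is a minimal Python/NumPy sketch (not the cited systems' code) of a log-linear model over candidate derivations; `candidate_features`, `weights`, and the gold-derivation index are hypothetical inputs, and the CKY chart parser that proposes candidates is abstracted away.

```python
import numpy as np

def loglinear_probs(candidate_features, weights):
    """candidate_features: (n_candidates, n_features) feature vectors
    phi(x, d); weights: (n_features,) parameters theta."""
    logits = candidate_features @ weights          # theta . phi(x, d)
    logits -= logits.max()                         # numerical stability
    exp = np.exp(logits)
    return exp / exp.sum()                         # p(d | x) ∝ exp(theta.phi)

def best_derivation(candidate_features, weights):
    """Inference: pick the highest-scoring candidate derivation."""
    return int(np.argmax(candidate_features @ weights))

def update(weights, candidate_features, gold_idx, lr=0.1):
    """One gradient step on the log-likelihood of the gold derivation:
    grad = phi(gold) - E_p[phi]."""
    probs = loglinear_probs(candidate_features, weights)
    expected = probs @ candidate_features
    return weights + lr * (candidate_features[gold_idx] - expected)
```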
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_42", "@cite_44", "@cite_43", "@cite_15", "@cite_75", "@cite_20", "@cite_8", "@cite_52", "@cite_49", "@cite_80", "@cite_70", "@cite_55", "@cite_25", "@cite_33", "@cite_53", "@cite_63", "@cite_13" ], "mid": [ "2250225488", "", "2252136820", "2111742432", "1496189301", "2467476605", "2227250678", "147290778", "2137607685", "2473222270", "2302963717", "", "", "2295690548", "2950109442", "2420948438", "2156621282", "2126170172", "2189089430" ], "abstract": [ "A central challenge in semantic parsing is handling the myriad ways in which knowledge base predicates can be expressed. Traditionally, semantic parsers are trained primarily from text paired with knowledge base information. Our goal is to exploit the much larger amounts of raw text not tied to any knowledge base. In this paper, we turn semantic parsing on its head. Given an input utterance, we first use a simple method to deterministically generate a set of candidate logical forms with a canonical realization in natural language for each. Then, we use a paraphrase model to choose the realization that best paraphrases the input, and output the corresponding logical form. We present two simple paraphrase models, an association model and a vector space model, and train them jointly from question-answer pairs. Our system PARASEMPRE improves stateof-the-art accuracies on two recently released question-answering datasets.", "", "In this paper, we train a semantic parser that scales up to Freebase. Instead of relying on annotated logical forms, which is especially expensive to obtain at large scale, we learn from question-answer pairs. The main challenge in this setting is narrowing down the huge number of possible logical predicates for a given question. We tackle this problem in two ways: First, we build a coarse mapping from phrases to predicates using a knowledge base and a large text corpus. Second, we use a bridging operation to generate additional predicates based on neighboring predicates. On the dataset of Cai and Yates (2013), despite not having annotated logical forms, our system outperforms their state-of-the-art parser. Additionally, we collected a more realistic and challenging dataset of question-answer pairs and improves over a natural baseline.", "We consider the problem of learning to parse sentences to lambda-calculus representations of their underlying semantics and present an algorithm that learns a weighted combinatory categorial grammar (CCG). A key idea is to introduce non-standard CCG combinators that relax certain parts of the grammar—for example allowing flexible word order, or insertion of lexical items— with learned costs. We also present a new, online algorithm for inducing a weighted CCG. Results for the approach on ATIS data show 86 F-measure in recovering fully correct semantic analyses and 95.9 F-measure by a partial-match criterion, a more than 5 improvement over the 90.3 partial-match figure reported by He and Young (2006).", "This paper addresses the problem of mapping natural language sentences to lambda–calculus encodings of their meaning. We describe a learning algorithm that takes as input a training set of sentences labeled with expressions in the lambda calculus. The algorithm induces a grammar for the problem, along with a log-linear model that represents a distribution over syntactic and semantic analyses conditioned on the input sentence. 
We apply the method to the task of learning natural language interfaces to databases and show that the learned parsers outperform previous methods in two benchmark database domains.", "We introduce several probabilistic models for learning the lexicon of a semantic parser. Lexicon learning is the first step of training a semantic parser for a new application domain and the quality of the learned lexicon significantly affects both the accuracy and efficiency of the final semantic parser. Existing work on lexicon learning has focused on heuristic methods that lack convergence guarantees and require significant human input in the form of lexicon templates or annotated logical forms. In contrast, our probabilistic models are trained directly from question answer pairs using EM and our simplest model has a concave objective that guarantees convergence to a global optimum. An experimental evaluation on a set of 4th grade science questions demonstrates that our models improve semantic parser accuracy (35-70 error reduction) and efficiency (4-25x more sentences per second) relative to prior work despite using less human input. Our models also obtain competitive results on GEO880 without any datasetspecific engineering.", "We consider the problem of learning factored probabilistic CCG grammars for semantic parsing from data containing sentences paired with logical-form meaning representations. Traditional CCG lexicons list lexical items that pair words and phrases with syntactic and semantic content. Such lexicons can be inefficient when words appear repeatedly with closely related lexical content. In this paper, we introduce factored lexicons, which include both lexemes to model word meaning and templates to model systematic variation in word usage. We also present an algorithm for learning factored CCG lexicons, along with a probabilistic parse-selection model. Evaluations on benchmark datasets demonstrate that the approach learns highly accurate parsers, whose generalization performance benefits greatly from the lexical factoring.", "We present a method for training a semantic parser using only a knowledge base and an unlabeled text corpus, without any individually annotated sentences. Our key observation is that multiple forms of weak supervision can be combined to train an accurate semantic parser: semantic supervision from a knowledge base, and syntactic supervision from dependency-parsed sentences. We apply our approach to train a semantic parser that uses 77 relations from Freebase in its knowledge representation. This semantic parser extracts instances of binary relations with state-of-the-art accuracy, while simultaneously recovering much richer semantic structures, such as conjunctions of multiple relations with partially shared arguments. We demonstrate recovery of this richer structure by extracting logical forms from natural language queries against Freebase. On this task, the trained semantic parser achieves 80 precision and 56 recall, despite never having seen an annotated logical form.", "We present methods to control the lexicon size when learning a Combinatory Categorial Grammar semantic parser. Existing methods incrementally expand the lexicon by greedily adding entries, considering a single training datapoint at a time. We propose using corpus-level statistics for lexicon learning decisions. We introduce voting to globally consider adding entries to the lexicon, and pruning to remove entries no longer required to explain the training data. 
Our methods result in state-of-the-art performance on the task of executing sequences of natural language instructions, achieving up to 25 error reduction, with lexicons that are up to 70 smaller and are qualitatively less noisy.", "Traditional semantic parsers map language onto compositional, executable queries in a fixed schema. This mapping allows them to effectively leverage the information contained in large, formal knowledge bases (KBs, e.g., Freebase) to answer questions, but it is also fundamentally limiting---these semantic parsers can only assign meaning to language that falls within the KB's manually-produced schema. Recently proposed methods for open vocabulary semantic parsing overcome this limitation by learning execution models for arbitrary language, essentially using a text corpus as a kind of knowledge base. However, all prior approaches to open vocabulary semantic parsing replace a formal KB with textual information, making no use of the KB in their models. We show how to combine the disparate representations used by these two approaches, presenting for the first time a semantic parser that (1) produces compositional, executable representations of language, (2) can successfully leverage the information contained in both a formal KB and a large corpus, and (3) is not limited to the schema of the underlying KB. We demonstrate significantly improved performance over state-of-the-art baselines on an open-domain natural language question answering task.", "The strongly typed syntax of grammar formalisms such as CCG, TAG, LFG and HPSG offers a synchronous framework for deriving syntactic structures and semantic logical forms. In contrast---partly due to the lack of a strong type system---dependency structures are easy to annotate and have become a widely used form of syntactic analysis for many languages. However, the lack of a type system makes a formal mechanism for deriving logical forms from dependency structures challenging. We address this by introducing a robust system based on the lambda calculus for deriving neo-Davidsonian logical forms from dependency trees. These logical forms are then used for semantic parsing of natural language to Freebase. Experiments on the Free917 and WebQuestions datasets show that our representation is superior to the original dependency trees and that it outperforms a CCG-based representation on this task. Compared to prior work, we obtain the strongest result to date on Free917 and competitive results on WebQuestions.", "", "", "Semantic parsers conventionally construct logical forms bottom-up in a fixed order, resulting in the generation of many extraneous partial logical forms. In this paper, we combine ideas from imitation learning and agenda-based parsing to train a semantic parser that searches partial logical forms in a more strategic order. Empirically, our parser reduces the number of constructed partial logical forms by an order of magnitude, and obtains a 6x-9x speedup over fixed-order parsing, while maintaining comparable accuracy.", "Universal Dependencies (UD) offer a uniform cross-lingual syntactic representation, with the aim of advancing multilingual applications. Recent work shows that semantic parsing can be accomplished by transforming syntactic dependencies to logical forms. However, this work is limited to English, and cannot process dependency graphs, which allow handling complex phenomena such as control. 
In this work, we introduce UDepLambda, a semantic interface for UD, which maps natural language to logical forms in an almost language-independent fashion and can process dependency graphs. We perform experiments on question answering against Freebase and provide German and Spanish translations of the WebQuestions and GraphQuestions datasets to facilitate multilingual evaluation. Results show that UDepLambda outperforms strong baselines across languages and datasets. For English, it achieves a 4.9 F1 point improvement over the state-of-the-art on GraphQuestions. Our code and data can be downloaded at this https URL.", "We consider the task of learning a context-dependent mapping from utterances to denotations. With only denotations at training time, we must search over a combinatorially large space of logical forms, which is even larger with context-dependent utterances. To cope with this challenge, we perform successive projections of the full model onto simpler models that operate over equivalence classes of logical forms. Though less expressive, we find that these simpler models are much faster and can be surprisingly effective. Moreover, they can be used to bootstrap the full model. Finally, we collected three new context-dependent semantic parsing datasets, and develop a new left-to-right parser.", "We present an approach to learning a model-theoretic semantics for natural language tied to Freebase. Crucially, our approach uses an open predicate vocabulary, enabling it to produce denotations for phrases such as \"Republican front-runner from Texas\" whose semantics cannot be represented using the Freebase schema. Our approach directly converts a sentence's syntactic CCG parse into a logical form containing predicates derived from the words in the sentence, assigning each word a consistent semantics across sentences. This logical form is evaluated against a learned probabilistic database that defines a distribution over denotations for each textual predicate. A training phase produces this probabilistic database using a corpus of entity-linked text and probabilistic matrix factorization with a novel ranking objective function. We evaluate our approach on a compositional question answering task where it outperforms several competitive baselines. We also compare our approach against manually annotated Freebase queries, finding that our open predicate vocabulary enables us to answer many questions that Freebase cannot.", "In this paper we introduce a novel semantic parsing approach to query Freebase in natural language without requiring manual annotations or question-answer pairs. Our key insight is to represent natural language via semantic graphs whose topology shares many commonalities with Freebase. Given this representation, we conceptualize semantic parsing as a graph matching problem. Our model converts sentences to semantic graphs using CCG and subsequently grounds them to Freebase guided by denotations as a form of weak supervision. Evaluation experiments on a subset of the F REE 917 and W EB Q UESTIONS benchmark datasets show our semantic parser improves over the state of the art.", "The context in which language is used provides a strong signal for learning to recover its meaning. In this paper, we show it can be used within a grounded CCG semantic parsing approach that learns a joint model of meaning and context for interpreting and executing natural language instructions, using various types of weak supervision. 
The joint nature provides crucial benefits by allowing situated cues, such as the set of visible objects, to directly influence learning. It also enables algorithms that learn while executing instructions, for example by trying to replicate human actions. Experiments on a benchmark navigational dataset demonstrate strong performance under differing forms of supervision, including correctly executing 60 more instruction sets relative to the previous state of the art." ] }
1812.10071
2906542368
Many semantic video analysis tasks can benefit from multiple, heterogeneous signals. For example, in addition to the original RGB input sequences, sequences of optical flow are usually used to boost the performance of human action recognition in videos. To learn from these heterogeneous input sources, existing methods rely on two-stream architectural designs that contain independent, parallel streams of Recurrent Neural Networks (RNNs). However, two-stream RNNs do not fully exploit the reciprocal information contained in the multiple signals, let alone exploit it in a recurrent manner. To this end, we propose in this paper a novel recurrent architecture, termed Coupled Recurrent Network (CRN), to deal with multiple input sources. In CRN, the parallel streams of RNNs are coupled together. The key design of CRN is a Recurrent Interpretation Block (RIB) that supports learning of reciprocal feature representations from multiple signals in a recurrent manner. Unlike RNNs, which stack the training loss at each time step or at the last time step, we propose an effective and efficient training strategy for CRN. Experiments show the efficacy of the proposed CRN. In particular, we achieve the new state of the art on benchmark datasets for human action recognition and multi-person pose estimation.
Countless learning tasks require dealing with sequential data. Image captioning @cite_0 , speech synthesis, and music generation all require that a model produce outputs that are sequences. In other domains, such as time series prediction, video analysis @cite_35 , and music information retrieval, a model must learn from inputs that are sequences. RNNs are models that can capture the dynamics of sequences via cycles in their network of nodes. RNNs have also been successfully extended to sequential data with multiple modalities; for example, they can handle text and images as simultaneous input sources for better image recognition.
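As a reminder of the mechanism this paragraph describes, the following minimal NumPy sketch implements a vanilla RNN step, where the hidden state feeds back into itself across time; the parameter names (`Wxh`, `Whh`, `bh`) are generic, not any cited model's.

```python
import numpy as np

def rnn_forward(xs, Wxh, Whh, bh, h0):
    """Vanilla RNN: h_t = tanh(Wxh x_t + Whh h_{t-1} + bh).
    xs: (T, d_in) input sequence; returns all hidden states (T, d_h)."""
    h, hs = h0, []
    for x in xs:
        h = np.tanh(Wxh @ x + Whh @ h + bh)  # the recurrent 'cycle'
        hs.append(h)
    return np.stack(hs)
```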
{ "cite_N": [ "@cite_0", "@cite_35" ], "mid": [ "1811254738", "2182762369" ], "abstract": [ "In this paper, we present a multimodal Recurrent Neural Network (m-RNN) model for generating novel image captions. It directly models the probability distribution of generating a word given previous words and an image. Image captions are generated by sampling from this distribution. The model consists of two sub-networks: a deep recurrent neural network for sentences and a deep convolutional network for images. These two sub-networks interact with each other in a multimodal layer to form the whole m-RNN model. The effectiveness of our model is validated on four benchmark datasets (IAPR TC-12, Flickr 8K, Flickr 30K and MS COCO). Our model outperforms the state-of-the-art methods. In addition, we apply the m-RNN model to retrieval tasks for retrieving images or sentences, and achieves significant performance improvement over the state-of-the-art methods which directly optimize the ranking objective function for retrieval. The project page of this work is: www.stat.ucla.edu junhua.mao m-RNN.html .", "Abstract : For most people, watching a brief video and describing what happened (inwords) is an easy task. For machines, extracting the meaning from video pixelsand generating a sentence description is a very complex problem. The goal of myresearch is to develop models that can automatically generate natural language(NL) descriptions for events in videos. As a first step, this proposal presentsdeep recurrent neural network models for video to text generation. I build onrecent deep machine learning approaches to develop video description modelsusing a unified deep neural network with both convolutional and recurrentstructure. This technique treats the video domain as another language andtakes a machine translation approach using the deep network to translate videosto text. In my initial approach, I adapt a model that can learn on images andcaptions to transfer knowledge from this auxiliary task to generate descriptionsfor short video clips. Next, I present an end-to-end deep network that can jointlymodel a sequence of video frames and a sequence of words. The second part ofthe proposal outlines a set of models to significantly extend work in this area.Specifically, I propose techniques to integrate linguistic knowledge from plaintext corpora; and attention methods to focus on objects and track their interactionsto generate more diverse and accurate descriptions. To move beyondshort video clips, I also outline models to process multi-activity movie videos,learning to jointly segment and describe coherent event sequences. I proposefurther extensions to take advantage of movie scripts and subtitle informationto generate richer descriptions." ] }
1812.10071
2906542368
Some LSTM-based two-stream networks have been proposed as well. @cite_16 @cite_18 proposed training video recognition models using LSTMs that capture temporal state dependencies and explicitly model short snippets of ConvNet activations. @cite_8 demonstrated that two-stream LSTMs outperform improved dense trajectories (iDT) @cite_3 and two-stream CNNs @cite_29 , although they needed to pre-train their architecture on one million sports videos. VideoLSTM @cite_1 applies convolutional operations within the LSTM on sequences of images or feature maps; additionally, an attention model is stacked on top of the ConvLSTM to further refine the temporal features. Sun et al. @cite_26 also propose a lattice LSTM for long and complex temporal modeling. These two-stream LSTMs were all trained independently and combined at the probability level. Even though the lattice LSTM jointly trains the gates between the two streams, their representations are not completely coupled.
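The probability-level combination criticized above can be illustrated with a short PyTorch sketch; `StreamLSTM` and `late_fusion` are illustrative stand-ins under stated assumptions, not the cited architectures, and the layer sizes are arbitrary.

```python
import torch
import torch.nn as nn

class StreamLSTM(nn.Module):
    """One independent stream (e.g., RGB or optical-flow features) mapped
    to class scores via an LSTM and a linear head."""
    def __init__(self, d_in, d_h, n_classes):
        super().__init__()
        self.lstm = nn.LSTM(d_in, d_h, batch_first=True)
        self.head = nn.Linear(d_h, n_classes)

    def forward(self, x):                  # x: (B, T, d_in)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])       # score from the last time step

def late_fusion(rgb_logits, flow_logits, w=0.5):
    """Probability-level fusion of independently trained streams; the
    streams never exchange hidden states, which is the limitation the
    coupled design targets."""
    p_rgb = torch.softmax(rgb_logits, dim=-1)
    p_flow = torch.softmax(flow_logits, dim=-1)
    return w * p_rgb + (1 - w) * p_flow
```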
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_8", "@cite_29", "@cite_1", "@cite_3", "@cite_16" ], "mid": [ "", "2963447094", "1923404803", "2156303437", "2963246338", "2105101328", "2951183276" ], "abstract": [ "", "Human actions captured in video sequences are threedimensional signals characterizing visual appearance and motion dynamics. To learn action patterns, existing methods adopt Convolutional and or Recurrent Neural Networks (CNNs and RNNs). CNN based methods are effective in learning spatial appearances, but are limited in modeling long-term motion dynamics. RNNs, especially Long Short- Term Memory (LSTM), are able to learn temporal motion dynamics. However, naively applying RNNs to video sequences in a convolutional manner implicitly assumes that motions in videos are stationary across different spatial locations. This assumption is valid for short-term motions but invalid when the duration of the motion is long.,,In this work, we propose Lattice-LSTM (L2STM), which extends LSTM by learning independent hidden state transitions of memory cells for individual spatial locations. This method effectively enhances the ability to model dynamics across time and addresses the non-stationary issue of long-term motion dynamics without significantly increasing the model complexity. Additionally, we introduce a novel multi-modal training procedure for training our network. Unlike traditional two-stream architectures which use RGB and optical flow information as input, our two-stream model leverages both modalities to jointly train both input gates and both forget gates in the network rather than treating the two streams as separate entities with no information about the other. We apply this end-to-end system to benchmark datasets (UCF-101 and HMDB-51) of human action recognition. Experiments show that on both datasets, our proposed method outperforms all existing ones that are based on LSTM and or CNNs of similar model complexities.", "Convolutional neural networks (CNNs) have been extensively applied for image recognition problems giving state-of-the-art results on recognition, detection, segmentation and retrieval. In this work we propose and evaluate several deep neural network architectures to combine image information across a video over longer time periods than previously attempted. We propose two methods capable of handling full length videos. The first method explores various convolutional temporal feature pooling architectures, examining the various design choices which need to be made when adapting a CNN for this task. The second proposed method explicitly models the video as an ordered sequence of frames. For this purpose we employ a recurrent neural network that uses Long Short-Term Memory (LSTM) cells which are connected to the output of the underlying CNN. Our best networks exhibit significant performance improvements over previously published results on the Sports 1 million dataset (73.1 vs. 60.9 ) and the UCF-101 datasets with (88.6 vs. 88.0 ) and without additional optical flow information (82.6 vs. 73.0 ).", "We investigate architectures of discriminatively trained deep Convolutional Networks (ConvNets) for action recognition in video. The challenge is to capture the complementary information on appearance from still frames and motion between frames. We also aim to generalise the best performing hand-crafted features within a data-driven learning framework. Our contribution is three-fold. 
First, we propose a two-stream ConvNet architecture which incorporates spatial and temporal networks. Second, we demonstrate that a ConvNet trained on multi-frame dense optical flow is able to achieve very good performance in spite of limited training data. Finally, we show that multitask learning, applied to two different action classification datasets, can be used to increase the amount of training data and improve the performance on both. Our architecture is trained and evaluated on the standard video actions benchmarks of UCF-101 and HMDB-51, where it is competitive with the state of the art. It also exceeds by a large margin previous attempts to use deep nets for video classification.", "Abstract We present VideoLSTM for end-to-end sequence learning of actions in video. Rather than adapting the video to the peculiarities of established recurrent or convolutional architectures, we adapt the architecture to fit the requirements of the video medium. Starting from the soft-Attention LSTM, VideoLSTM makes three novel contributions. First, video has a spatial layout. To exploit the spatial correlation we hardwire convolutions in the soft-Attention LSTM architecture. Second, motion not only informs us about the action content, but also guides better the attention towards the relevant spatio-temporal locations. We introduce motion-based attention. And finally, we demonstrate how the attention from VideoLSTM can be exploited for action localization by relying on the action class label and temporal attention smoothing. Experiments on UCF101, HMDB51 and THUMOS13 reveal the benefit of the video-specific adaptations of VideoLSTM in isolation as well as when integrated in a combined architecture. It compares favorably against other LSTM architectures for action classification and especially action localization.", "Recently dense trajectories were shown to be an efficient video representation for action recognition and achieved state-of-the-art results on a variety of datasets. This paper improves their performance by taking into account camera motion to correct them. To estimate camera motion, we match feature points between frames using SURF descriptors and dense optical flow, which are shown to be complementary. These matches are, then, used to robustly estimate a homography with RANSAC. Human motion is in general different from camera motion and generates inconsistent matches. To improve the estimation, a human detector is employed to remove these matches. Given the estimated camera motion, we remove trajectories consistent with it. We also use this estimation to cancel out camera motion from the optical flow. This significantly improves motion-based descriptors, such as HOF and MBH. Experimental results on four challenging action datasets (i.e., Hollywood2, HMDB51, Olympic Sports and UCF50) significantly outperform the current state of the art.", "Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or \"temporally deep\", are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. 
In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are \"doubly deep\"' in that they can be compositional in spatial and temporal \"layers\". Such models may have advantages when target concepts are complex and or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they directly can map variable-length inputs (e.g., video frames) to variable length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and or optimized." ] }
1812.10071
2906542368
Beyond action recognition, other computer vision tasks also use multiple branches to improve accuracy. Building on the multi-stage work of @cite_7 , @cite_17 presents a real-time pose estimation method that adds a bottom-up representation of association scores via Part Affinity Fields (PAFs). By running a joint-association network in parallel with the joint-detection network, multi-person pose estimation is substantially improved.
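A minimal PyTorch sketch of the parallel-branch idea follows: one head predicts joint-confidence heatmaps, the other predicts PAFs as 2D vector fields per limb. Layer sizes and names are illustrative assumptions rather than the cited network's exact configuration.

```python
import torch.nn as nn

class TwoBranchStage(nn.Module):
    """One stage with parallel branches: confidence maps for joint
    detection and PAFs for joint association (sizes are placeholders)."""
    def __init__(self, c_in, n_joints, n_limbs):
        super().__init__()
        self.trunk = nn.Sequential(nn.Conv2d(c_in, 128, 3, padding=1),
                                   nn.ReLU())
        self.heatmap_branch = nn.Conv2d(128, n_joints, 1)  # where joints are
        self.paf_branch = nn.Conv2d(128, 2 * n_limbs, 1)   # 2D field per limb

    def forward(self, feats):
        h = self.trunk(feats)
        return self.heatmap_branch(h), self.paf_branch(h)
```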
{ "cite_N": [ "@cite_7", "@cite_17" ], "mid": [ "2964304707", "2951856387" ], "abstract": [ "Pose Machines provide a sequential prediction framework for learning rich implicit spatial models. In this work we show a systematic design for how convolutional networks can be incorporated into the pose machine framework for learning image features and image-dependent spatial models for the task of pose estimation. The contribution of this paper is to implicitly model long-range dependencies between variables in structured prediction tasks such as articulated pose estimation. We achieve this by designing a sequential architecture composed of convolutional networks that directly operate on belief maps from previous stages, producing increasingly refined estimates for part locations, without the need for explicit graphical model-style inference. Our approach addresses the characteristic difficulty of vanishing gradients during training by providing a natural learning objective function that enforces intermediate supervision, thereby replenishing back-propagated gradients and conditioning the learning procedure. We demonstrate state-of-the-art performance and outperform competing methods on standard benchmarks including the MPII, LSP, and FLIC datasets.", "We present an approach to efficiently detect the 2D pose of multiple people in an image. The approach uses a nonparametric representation, which we refer to as Part Affinity Fields (PAFs), to learn to associate body parts with individuals in the image. The architecture encodes global context, allowing a greedy bottom-up parsing step that maintains high accuracy while achieving realtime performance, irrespective of the number of people in the image. The architecture is designed to jointly learn part locations and their association via two branches of the same sequential prediction process. Our method placed first in the inaugural COCO 2016 keypoints challenge, and significantly exceeds the previous state-of-the-art result on the MPII Multi-Person benchmark, both in performance and efficiency." ] }
1812.09832
2905751765
Despite the significant success of image-to-image translation and of latent-representation-based facial attribute editing and expression synthesis, existing approaches still have limitations in the sharpness of details, distinct image translation, and identity preservation. To address these issues, we propose a Texture Deformation Based GAN, namely TDB-GAN, to disentangle texture from the original image and transfer domains based on the extracted texture. The approach utilizes the texture to transfer facial attributes and expressions without considering the object pose. This leads to sharper details and a more distinct visual effect in the synthesized faces. In addition, it brings faster convergence during training. The effectiveness of the proposed method is validated through extensive ablation studies. We also evaluate our approach qualitatively and quantitatively on facial attribute and facial expression synthesis. The results on both the CelebA and RaFD datasets suggest that Texture Deformation Based GAN achieves better performance.
@cite_2 introduces the Deforming Autoencoder (DAE), a novel generative model that decomposes the input image into texture and deformation. DAE follows the deformable template paradigm and models image generation through texture synthesis and spatial deformation. DAE can obtain the prototypical object by removing the deformation. By discarding variability due to deformations, the texture encoded from the original image is a purer representation. Moreover, by modeling the face image in terms of a low-dimensional latent code, we can more easily control the facial attributes and expressions during the generative process.
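Assuming separate decoder heads for texture and deformation, the warping step that this generation process relies on can be sketched in PyTorch as follows; `dae_decode` and its inputs are hypothetical, and only the warp itself follows the deformable-template idea described above.

```python
import torch
import torch.nn.functional as F

def dae_decode(texture, flow):
    """Warp a texture rendered in canonical (template) coordinates by a
    predicted deformation. texture: (N, C, H, W); flow: (N, H, W, 2)
    offsets added to the identity sampling grid (both assumed to come
    from separate decoder heads)."""
    n, _, h, w = texture.shape
    ys = torch.linspace(-1, 1, h)
    xs = torch.linspace(-1, 1, w)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")
    identity = torch.stack((gx, gy), dim=-1).expand(n, h, w, 2)  # (x, y)
    return F.grid_sample(texture, identity + flow, align_corners=True)
```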
{ "cite_N": [ "@cite_2" ], "mid": [ "2807725536" ], "abstract": [ "In this work we introduce Deforming Autoencoders, a generative model for images that disentangles shape from appearance in an unsupervised manner. As in the deformable template paradigm, shape is represented as a deformation between a canonical coordinate system ( template') and an observed image, while appearance is modeled in canonical', template, coordinates, thus discarding variability due to deformations. We introduce novel techniques that allow this approach to be deployed in the setting of autoencoders and show that this method can be used for unsupervised group-wise image alignment. We show experiments with expression morphing in humans, hands, and digits, face manipulation, such as shape and appearance interpolation, as well as unsupervised landmark localization. A more powerful form of unsupervised disentangling becomes possible in template coordinates, allowing us to successfully decompose face images into shading and albedo, and further manipulate face images." ] }
1812.09832
2905751765
@cite_11 is a promising generative model that can be used to solve various computer vision tasks such as image generation @cite_18 @cite_7 @cite_22 , image translation @cite_17 @cite_5 @cite_9 , and face image editing @cite_16 @cite_25 @cite_21 . The GAN model is designed to learn a generator G that produces fake samples and a discriminator D that distinguishes between real and fake samples. Besides the typical adversarial loss, a reconstruction loss is often employed @cite_25 @cite_20 to generate faces that are as realistic as possible. Additionally, in our approach an identity loss is proposed to ensure that the generated faces preserve the original identity.
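A hedged sketch of how the three losses mentioned here might be combined in a generator update; `G`, `D`, and `face_embed` (a pretrained identity encoder) are assumed interfaces, and the loss weights are placeholders rather than any paper's tuned values.

```python
import torch.nn.functional as F

def generator_losses(G, D, face_embed, x, src_label, tgt_label,
                     lambda_rec=10.0, lambda_id=1.0):
    """x: input faces; src_label/tgt_label: domain/attribute conditions.
    All module interfaces are assumptions for illustration."""
    fake = G(x, tgt_label)
    adv = -D(fake).mean()                        # adversarial: fool D
    rec = F.l1_loss(G(fake, src_label), x)       # reconstruct the input
    idt = 1.0 - F.cosine_similarity(face_embed(fake),
                                    face_embed(x)).mean()  # keep identity
    return adv + lambda_rec * rec + lambda_id * idt
```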
{ "cite_N": [ "@cite_18", "@cite_11", "@cite_22", "@cite_7", "@cite_9", "@cite_21", "@cite_5", "@cite_16", "@cite_25", "@cite_20", "@cite_17" ], "mid": [ "2952010110", "", "", "2963567641", "2608015370", "2797823148", "2962793481", "2964118024", "", "2770587987", "" ], "abstract": [ "In this paper, we propose a novel generative model named Stacked Generative Adversarial Networks (SGAN), which is trained to invert the hierarchical representations of a bottom-up discriminative network. Our model consists of a top-down stack of GANs, each learned to generate lower-level representations conditioned on higher-level representations. A representation discriminator is introduced at each feature hierarchy to encourage the representation manifold of the generator to align with that of the bottom-up discriminative network, leveraging the powerful discriminative representations to guide the generative model. In addition, we introduce a conditional loss that encourages the use of conditional information from the layer above, and a novel entropy loss that maximizes a variational lower bound on the conditional entropy of generator outputs. We first train each stack independently, and then train the whole model end-to-end. Unlike the original GAN that uses a single noise vector to represent all the variations, our SGAN decomposes variations into multiple levels and gradually resolves uncertainties in the top-down generative process. Based on visual inspection, Inception scores and visual Turing test, we demonstrate that SGAN is able to generate images of much higher quality than GANs without stacking.", "", "", "This paper investigates a novel problem of generating images from visual attributes. We model the image as a composite of foreground and background and develop a layered generative model with disentangled latent variables that can be learned end-to-end using a variational auto-encoder. We experiment with natural images of faces and birds and demonstrate that the proposed models are capable of generating realistic and diverse samples with disentangled latent representations. We use a general energy minimization algorithm for posterior inference of latent variables given novel images. Therefore, the learned generative models show excellent quantitative and visual results in the tasks of attribute-conditioned image reconstruction and completion.", "Conditional Generative Adversarial Networks (GANs) for cross-domain image-to-image translation have made much progress recently. Depending on the task complexity, thousands to millions of labeled image pairs are needed to train a conditional GAN. However, human labeling is expensive, even impractical, and large quantities of data may not always be available. Inspired by dual learning from natural language translation, we develop a novel dual-GAN mechanism, which enables image translators to be trained from two sets of unlabeled images from two domains. In our architecture, the primal GAN learns to translate images from domain U to those in domain V, while the dual GAN learns to invert the task. The closed loop made by the primal and dual tasks allows images from either domain to be translated and then reconstructed. Hence a loss function that accounts for the reconstruction error of images can be used to train the translators. Experiments on multiple image translation tasks with unlabeled data show considerable performance gain of DualGAN over a single GAN. 
For some tasks, DualGAN can even achieve comparable or slightly better results than conditional GAN trained on fully labeled data.", "In this paper, we present an integrated system for automatically generating and editing face images through face swapping, attribute-based editing, and random face parts synthesis. The proposed system is based on a deep neural network that variationally learns the face and hair regions with large-scale face image datasets. Different from conventional variational methods, the proposed network represents the latent spaces individually for faces and hairs. We refer to the proposed network as region-separative generative adversarial network (RSGAN). The proposed network independently handles face and hair appearances in the latent spaces, and then, face swapping is achieved by replacing the latent-space representations of the faces, and reconstruct the entire face image with them. This approach in the latent space robustly performs face swapping even for images which the previous methods result in failure due to inappropriate fitting or the 3D morphable models. In addition, the proposed system can further edit face-swapped images with the same network by manipulating visual attributes or by composing them with randomly generated face or hair parts.", "Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.", "Recent studies on face attribute transfer have achieved great success. A lot of models are able to transfer face attributes with an input image. However, they suffer from three limitations: (1) incapability of generating image by exemplars; (2) being unable to transfer multiple face attributes simultaneously; (3) low quality of generated images, such as low-resolution or artifacts. To address these limitations, we propose a novel model which receives two images of opposite attributes as inputs. Our model can transfer exactly the same type of attributes from one image to another by exchanging certain part of their encodings. All the attributes are encoded in a disentangled manner in the latent space, which enables us to manipulate several attributes simultaneously. Besides, our model learns the residual images so as to facilitate training on higher resolution images. With the help of multi-scale discriminators for adversarial training, it can even generate high-quality images with finer details and less artifacts. We demonstrate the effectiveness of our model on overcoming the above three limitations by comparing with other methods on the CelebA face database. 
A pytorch implementation is available at https: github.com Prinsphield ELEGANT.", "", "Facial attribute editing aims to modify either single or multiple attributes on a face image. Since it is practically infeasible to collect images with arbitrarily specified attributes for each person, the generative adversarial net (GAN) and the encoder-decoder architecture are usually incorporated to handle this task. With the encoder-decoder architecture, arbitrary attribute editing can then be conducted by decoding the latent representation of the face image conditioned on the specified attributes. A few existing methods attempt to establish attribute-independent latent representation for arbitrarily changing the attributes. However, since the attributes portray the characteristics of the face image, the attribute-independent constraint on the latent representation is excessive. Such constraint may result in information loss and unexpected distortion on the generated images (e.g. over-smoothing), especially for those identifiable attributes such as gender, race etc. Instead of imposing the attribute-independent constraint on the latent representation, we introduce an attribute classification constraint on the generated image, just requiring the correct change of the attributes. Meanwhile, reconstruction learning is introduced in order to guarantee the preservation of all other attribute-excluding details on the generated image, and adversarial learning is employed for visually realistic generation. Moreover, our method can be naturally extended to attribute intensity manipulation. Experiments on the CelebA dataset show that our method outperforms the state-of-the-arts on generating realistic attribute editing results with facial details well preserved.", "" ] }
1812.09832
2905751765
@cite_17 is a typical image-to-image translation method. The approach learns the mapping between input and output domains and has achieved impressive results on several image translation tasks @cite_5 @cite_9 @cite_24 . Pix2Pix combines an adversarial loss with an L1 loss to transfer images in a paired setting. For unpaired images, several frameworks such as MUNIT @cite_0 , CycleGAN @cite_5 , and Invertible Conditional GAN @cite_13 have been proposed. However, all of these frameworks try to learn the joint distribution between two domains, which limits their ability to handle multiple domains at the same time.
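The cycle-consistency constraint that CycleGAN adds for unpaired translation, F(G(x)) ≈ x and G(F(y)) ≈ y, reduces to a few lines; `G_xy` and `G_yx` below are the two direction generators, treated as assumed callables.

```python
import torch.nn.functional as F

def cycle_consistency(G_xy, G_yx, x, y):
    """L1 penalty on round-trip translations in both directions."""
    return F.l1_loss(G_yx(G_xy(x)), x) + F.l1_loss(G_xy(G_yx(y)), y)
```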
{ "cite_N": [ "@cite_9", "@cite_24", "@cite_0", "@cite_5", "@cite_13", "@cite_17" ], "mid": [ "2608015370", "", "2797650215", "2962793481", "2552611751", "" ], "abstract": [ "Conditional Generative Adversarial Networks (GANs) for cross-domain image-to-image translation have made much progress recently. Depending on the task complexity, thousands to millions of labeled image pairs are needed to train a conditional GAN. However, human labeling is expensive, even impractical, and large quantities of data may not always be available. Inspired by dual learning from natural language translation, we develop a novel dual-GAN mechanism, which enables image translators to be trained from two sets of unlabeled images from two domains. In our architecture, the primal GAN learns to translate images from domain U to those in domain V, while the dual GAN learns to invert the task. The closed loop made by the primal and dual tasks allows images from either domain to be translated and then reconstructed. Hence a loss function that accounts for the reconstruction error of images can be used to train the translators. Experiments on multiple image translation tasks with unlabeled data show considerable performance gain of DualGAN over a single GAN. For some tasks, DualGAN can even achieve comparable or slightly better results than conditional GAN trained on fully labeled data.", "", "Unsupervised image-to-image translation is an important and challenging problem in computer vision. Given an image in the source domain, the goal is to learn the conditional distribution of corresponding images in the target domain, without seeing any pairs of corresponding images. While this conditional distribution is inherently multimodal, existing approaches make an overly simplified assumption, modeling it as a deterministic one-to-one mapping. As a result, they fail to generate diverse outputs from a given source domain image. To address this limitation, we propose a Multimodal Unsupervised Image-to-image Translation (MUNIT) framework. We assume that the image representation can be decomposed into a content code that is domain-invariant, and a style code that captures domain-specific properties. To translate an image to another domain, we recombine its content code with a random style code sampled from the style space of the target domain. We analyze the proposed framework and establish several theoretical results. Extensive experiments with comparisons to the state-of-the-art approaches further demonstrates the advantage of the proposed framework. Moreover, our framework allows users to control the style of translation outputs by providing an example style image. Code and pretrained models are available at this https URL", "Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). 
Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.", "Generative Adversarial Networks (GANs) have recently demonstrated to successfully approximate complex data distributions. A relevant extension of this model is conditional GANs (cGANs), where the introduction of external information allows to determine specific representations of the generated images. In this work, we evaluate encoders to inverse the mapping of a cGAN, i.e., mapping a real image into a latent space and a conditional representation. This allows, for example, to reconstruct and modify real images of faces conditioning on arbitrary attributes. Additionally, we evaluate the design of cGANs. The combination of an encoder with a cGAN, which we call Invertible cGAN (IcGAN), enables to re-generate real images with deterministic complex modifications.", "" ] }
1812.09832
2905751765
@cite_20 is a multi-attribute facial editing model that contains three components at training time: an attribute classification constraint, reconstruction learning, and adversarial learning. The content that the latent representation delivers is uncertain and limited; hence, imposing the attribute label on the latent representation may unintentionally change other parts of the image. Similar to StarGAN, AttGAN applies an attribute classification constraint to guarantee correct attribute manipulation in the generated image and reconstruction learning to preserve attribute-excluding details. AttGAN frees the latent representation from the attribute-independent constraint, while our approach encodes the input into different latent representations to generate texture and employs image-to-image translation to achieve face editing.
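The two constraints attributed to AttGAN above can be sketched as a loss function; `enc`, `dec`, and the attribute classifier `C` are assumed module interfaces, and `lambda_rec` is a placeholder weight.

```python
import torch.nn.functional as F

def attgan_style_losses(enc, dec, C, x, a_src, a_tgt, lambda_rec=100.0):
    """x: input faces; a_src/a_tgt: binary attribute vectors (float).
    Classification on the edited image enforces the correct attribute
    change; decoding with the source attributes enforces reconstruction."""
    z = enc(x)
    edited = dec(z, a_tgt)
    cls = F.binary_cross_entropy_with_logits(C(edited), a_tgt)
    rec = F.l1_loss(dec(z, a_src), x)   # preserve attribute-excluding details
    return cls + lambda_rec * rec
```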
{ "cite_N": [ "@cite_20" ], "mid": [ "2770587987" ], "abstract": [ "Facial attribute editing aims to modify either single or multiple attributes on a face image. Since it is practically infeasible to collect images with arbitrarily specified attributes for each person, the generative adversarial net (GAN) and the encoder-decoder architecture are usually incorporated to handle this task. With the encoder-decoder architecture, arbitrary attribute editing can then be conducted by decoding the latent representation of the face image conditioned on the specified attributes. A few existing methods attempt to establish attribute-independent latent representation for arbitrarily changing the attributes. However, since the attributes portray the characteristics of the face image, the attribute-independent constraint on the latent representation is excessive. Such constraint may result in information loss and unexpected distortion on the generated images (e.g. over-smoothing), especially for those identifiable attributes such as gender, race etc. Instead of imposing the attribute-independent constraint on the latent representation, we introduce an attribute classification constraint on the generated image, just requiring the correct change of the attributes. Meanwhile, reconstruction learning is introduced in order to guarantee the preservation of all other attribute-excluding details on the generated image, and adversarial learning is employed for visually realistic generation. Moreover, our method can be naturally extended to attribute intensity manipulation. Experiments on the CelebA dataset show that our method outperforms the state-of-the-arts on generating realistic attribute editing results with facial details well preserved." ] }
1812.09551
2952522726
Taxonomy construction is not only a fundamental task for semantic analysis of text corpora, but also an important step for applications such as information filtering, recommendation, and Web search. Existing pattern-based methods extract hypernym-hyponym term pairs and then organize these pairs into a taxonomy. However, by considering each term as an independent concept node, they overlook the topical proximity and the semantic correlations among terms. In this paper, we propose a method for constructing topic taxonomies, wherein every node represents a conceptual topic and is defined as a cluster of semantically coherent concept terms. Our method, TaxoGen, uses term embeddings and hierarchical clustering to construct a topic taxonomy in a recursive fashion. To ensure the quality of the recursive process, it consists of: (1) an adaptive spherical clustering module for allocating terms to proper levels when splitting a coarse topic into fine-grained ones; (2) a local embedding module for learning term embeddings that maintain strong discriminative power at different levels of the taxonomy. Our experiments on two real datasets demonstrate the effectiveness of TaxoGen compared with baseline methods.
There have also been (semi-)supervised learning methods for taxonomy construction @cite_25 @cite_7 . These methods typically extract lexical features and learn a classifier that categorizes term pairs into relations or non-relations, based on curated training data of hypernym-hyponym pairs @cite_28 @cite_2 @cite_33 @cite_11 , or on syntactic contextual information harvested from NLP tools @cite_21 @cite_34 . Recent techniques @cite_32 @cite_20 @cite_24 @cite_17 @cite_6 in this category leverage pre-trained word embeddings and then use curated hypernymy relation datasets to learn a relation classifier. However, the training data for all these methods are limited to extracting hypernym-hyponym relations and cannot be easily adapted for constructing a topic taxonomy. Furthermore, for massive domain-specific text data (like the scientific publication data we use in this work), it is hardly possible to collect a rich set of supervised information from experts. Therefore, we focus on technical developments in unsupervised taxonomy construction.
{ "cite_N": [ "@cite_33", "@cite_7", "@cite_28", "@cite_21", "@cite_32", "@cite_17", "@cite_6", "@cite_24", "@cite_2", "@cite_34", "@cite_25", "@cite_20", "@cite_11" ], "mid": [ "1603863883", "38703128", "2155734303", "2138605095", "1516501661", "2168565044", "2561885448", "", "2164746077", "2096315187", "1575726881", "2210137989", "2100071287" ], "abstract": [ "Collaborative tagging allows users to tag online resources. We refer to the large database of tags and their relationships as a tag space. In a tag space, the popularity and correlation amongst tags capture the current social interests, and taxonomy is a useful way to organize these tags. As tags change over time, it is imperative to incorporate the temporal tag evolution into the taxonomies. In this paper, we formalize the problem of evolutionary taxonomy generation over a large database of tags. The proposed evolutionary taxonomy framework consists of two key features. Firstly, we develop a novel context-aware edge selection algorithm for taxonomy extraction. Secondly, we propose several algorithms for evolutionary taxonomy fusion. We conduct an extensive performance study using a very large real-life dataset (i.e., Del.ici.ous). The empirical results clearly show that our approach is effective and efficient.", "Although many algorithms have been developed to harvest lexical resources, few organize the mined terms into taxonomies. We propose (1) a semi-supervised algorithm that uses a root concept, a basic level concept, and recursive surface patterns to learn automatically from the Web hyponym-hypernym pairs subordinated to the root; (2) a Web based concept positioning procedure to validate the learned pairs' is-a relations; and (3) a graph algorithm that derives from scratch the integrated taxonomy structure of all the terms. Comparing results with WordNet, we find that the algorithm misses some concepts and links, but also that it discovers many additional ones lacking in WordNet. We evaluate the taxonomization power of our method on reconstructing parts of the WordNet taxonomy. Experiments show that starting from scratch, the algorithm can reconstruct 62 of the WordNet taxonomy for the regions tested.", "This paper presents a novel metric-based framework for the task of automatic taxonomy induction. The framework incrementally clusters terms based on ontology metric, a score indicating semantic distance; and transforms the task into a multi-criteria optimization based on minimization of taxonomy structures and modeling of term abstractness. It combines the strengths of both lexico-syntactic patterns and clustering through incorporating heterogeneous features. The flexible design of the framework allows a further study on which features are the best for the task under various conditions. The experiments not only show that our system achieves higher F1-measure than other state-of-the-art systems, but also reveal the interaction between features and various types of relations, as well as the interaction between features and term abstractness.", "Knowledge is indispensable to understanding. The ongoing information explosion highlights the need to enable machines to better understand electronic text in human language. Much work has been devoted to creating universal ontologies or taxonomies for this purpose. However, none of the existing ontologies has the needed depth and breadth for universal understanding. In this paper, we present a universal, probabilistic taxonomy that is more comprehensive than any existing ones. 
It contains 2.7 million concepts harnessed automatically from a corpus of 1.68 billion web pages. Unlike traditional taxonomies that treat knowledge as black and white, it uses probabilities to model inconsistent, ambiguous and uncertain information it contains. We present details of how the taxonomy is constructed, its probabilistic modeling, and its potential applications in text understanding.", "This work is concerned with distinguishing different semantic relations which exist between distributionally similar words. We compare a novel approach based on training a linear Support Vector Machine on pairs of feature vectors with state-of-the-art methods based on distributional similarity. We show that the new supervised approach does better even when there is minimal information about the target words in the training data, giving a 15 reduction in error rate over unsupervised approaches.", "Semantic hierarchy construction aims to build structures of concepts linked by hypernym‐hyponym (“is-a”) relations. A major challenge for this task is the automatic discovery of such relations. This paper proposes a novel and effective method for the construction of semantic hierarchies based on word embeddings, which can be used to measure the semantic relationship between words. We identify whether a candidate word pair has hypernym‐hyponym relation by using the word-embedding-based semantic projections between words and their hypernyms. Our result, an F-score of 73.74 , outperforms the state-of-theart methods on a manually labeled test dataset. Moreover, combining our method with a previous manually-built hierarchy extension method can further improve Fscore to 80.29 .", "Comunicacio presentada a la Conference on Empirical Methods in Natural Language Processing celebrada els dies 1 a 5 de novembre de 2016 a Austin, Texas.", "", "One of the core services provided by OWL reasoners is classification : the discovery of all subclass relationships between class names occurring in an ontology. Discovering these relations can be computationally expensive, particularly if individual subsumption tests are costly or if the number of class names is large. We present a classification algorithm which exploits partial information about subclass relationships to reduce both the number of individual tests and the cost of working with large ontologies. We also describe techniques for extracting such partial information from existing reasoners. Empirical results from a prototypical implementation demonstrate substantial performance improvements compared to existing algorithms and implementations.", "Taxonomies are the backbone of many structured, semantic knowledge resources. Recent works for extracting taxonomic relations from text focused on collecting lexical-syntactic patterns to extract the taxonomic relations by matching the patterns to text. These approaches, however, often show low coverage due to the lack of contextual analysis across sentences. To address this issue, we propose a novel approach that collectively utilizes contextual information of terms in syntactic structures such that if the set of contexts of a term includes most of contexts of another term, a subsumption relation between the two terms is inferred. We apply this method to the task of taxonomy construction from scratch, where we introduce another novel graph-based algorithm for taxonomic structure induction. 
Our experiment results show that the proposed method is well complementary with previous methods of linguistic pattern matching and significantly improves recall and thus F-measure.", "", "Hypernymy identification aims at detecting if is A relationship holds between two words or phrases. Most previous methods are based on lexical patterns or the Distributional Inclusion Hypothesis, and the accuracy of such methods is not ideal. In this paper, we propose a simple yet effective supervision framework to identify hypernymy relations using distributed term representations (a.k.a term embeddings). First, we design a distance-margin neural network to learn term embeddings based on some pre-extracted hypernymy data. Then, we apply such embeddings as term features to identify positive hypernymy pairs through a supervision method. Experimental results demonstrate that our approach outperforms other supervised methods on two popular datasets and the learned term embeddings has better quality than existing term distributed representations with respect to hypernymy identification.", "Taxonomies, especially the ones in specific domains, are becoming indispensable to a growing number of applications. State-of-the-art approaches assume there exists a text corpus to accurately characterize the domain of interest, and that a taxonomy can be derived from the text corpus using information extraction techniques. In reality, neither assumption is valid, especially for highly focused or fast-changing domains. In this paper, we study a challenging problem: Deriving a taxonomy from a set of keyword phrases. A solution can benefit many real life applications because i) keywords give users the flexibility and ease to characterize a specific domain; and ii) in many applications, such as online advertisements, the domain of interest is already represented by a set of keywords. However, it is impossible to create a taxonomy out of a keyword set itself. We argue that additional knowledge and contexts are needed. To this end, we first use a general purpose knowledgebase and keyword search to supply the required knowledge and context. Then we develop a Bayesian approach to build a hierarchical taxonomy for a given set of keywords. We reduce the complexity of previous hierarchical clustering approaches from O(n2 log n) to O(n log n), so that we can derive a domain specific taxonomy from one million keyword phrases in less than an hour. Finally, we conduct comprehensive large scale experiments to show the effectiveness and efficiency of our approach. A real life example of building an insurance-related query taxonomy illustrates the usefulness of our approach for specific domains." ] }
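As an illustration of the embedding-based supervised hypernymy detection that the recent techniques above rely on, the sketch below featurizes a term pair from pre-trained word vectors and trains a binary relation classifier on curated hypernym-hyponym pairs. The embedding lookup and training pairs are placeholders, and this featurization is one common choice rather than any specific cited system.

import numpy as np
from sklearn.linear_model import LogisticRegression

def pair_features(emb, hypo, hyper):
    # emb: dict mapping a term to its pre-trained vector (placeholder).
    u, v = emb[hypo], emb[hyper]
    # Concatenation plus offset vector, a common pair featurization.
    return np.concatenate([u, v, v - u])

def train_hypernym_classifier(emb, pairs, labels):
    # pairs: list of (hyponym, hypernym) candidates; labels: 1 if is-a holds.
    X = np.stack([pair_features(emb, h, g) for h, g in pairs])
    return LogisticRegression(max_iter=1000).fit(X, labels)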
1812.09670
2964162217
Detecting vehicles with strong robustness and high efficiency has become one of the key capabilities of fully autonomous driving cars. This topic has already been widely studied with GPU-accelerated deep learning approaches using image sensors and 3D LiDAR; however, few studies seek to address it with a horizontally mounted 2 @math laser scanner. The 2 @math laser scanner is equipped on almost every autonomous vehicle for its advantages in field of view, lighting invariance, high accuracy and relatively low price. In this paper, we propose a highly efficient search-based L-Shape fitting algorithm for detecting the positions and orientations of vehicles with a 2D laser scanner. Unlike approaches that formulate L-Shape fitting as a complex optimization problem, our method decomposes the L-Shape fitting into two steps: L-Shape vertex searching and L-Shape corner localization. Our approach is computationally efficient due to its minimized complexity. In on-road experiments, our approach is capable of adapting to various circumstances with high efficiency and robustness.
Other approaches have been developed using volumetric data from 3D LiDARs: some choose sequential projections of point clouds @cite_6 @cite_9 , while others train neural networks that can cope with unordered point cloud data through abstract feature learning, as in VoxelNet and PointNet. However, these approaches consume considerable computational resources and need a large-scale labeled dataset for training, not to mention that the sensors themselves are much more expensive than those used for 2D ranging.
{ "cite_N": [ "@cite_9", "@cite_6" ], "mid": [ "2337890890", "2211722331" ], "abstract": [ "In this work we present a novel end-to-end framework for tracking and classifying a robot's surroundings in complex, dynamic and only partially observable real-world environments. The approach deploys a recurrent neural network to filter an input stream of raw laser measurements in order to directly infer object locations, along with their identity in both visible and occluded areas. To achieve this we first train the network using unsupervised Deep Tracking, a recently proposed theoretical framework for end-to-end space occupancy prediction. We show that by learning to track on a large amount of unsupervised data, the network creates a rich internal representation of its environment which we in turn exploit through the principle of inductive transfer of knowledge to perform the task of it's semantic classification. As a result, we show that only a small amount of labelled data suffices to steer the network towards mastering this additional task. Furthermore we propose a novel recurrent neural network architecture specifically tailored to tracking and semantic classification in real-world robotics applications. We demonstrate the tracking and classification performance of the method on real-world data collected at a busy road junction. Our evaluation shows that the proposed end-to-end framework compares favourably to a state-of-the-art, model-free tracking solution and that it outperforms a conventional one-shot training scheme for semantic classification.", "Robust object recognition is a crucial skill for robots operating autonomously in real world environments. Range sensors such as LiDAR and RGBD cameras are increasingly found in modern robotic systems, providing a rich source of 3D information that can aid in this task. However, many current systems do not fully utilize this information and have trouble efficiently dealing with large amounts of point cloud data. In this paper, we propose VoxNet, an architecture to tackle this problem by integrating a volumetric Occupancy Grid representation with a supervised 3D Convolutional Neural Network (3D CNN). We evaluate our approach on publicly available benchmarks using LiDAR, RGBD, and CAD data. VoxNet achieves accuracy beyond the state of the art while labeling hundreds of instances per second." ] }
1812.09537
2906641606
Motivation: Traditional computational cluster schedulers are based on user inputs and run-time resource requests for memory and CPU, not IO. The run times of heavily IO-bound tasks, like those seen in many big data and bioinformatics problems, depend on the IO subsystem's scheduling and are problematic for cluster resource scheduling. The rescheduling of IO-intensive and errant tasks represents lost resources. Understanding the conditions of both successful and failed tasks, and differentiating between them, could provide knowledge for enhancing cluster scheduling and intelligent resource optimization. Results: We analyze a production computational cluster contributing 6.7 thousand CPU hours to research over two years. Through this analysis we develop a machine learning task-profiling agent for clusters that attempts to predict failures among identically provisioned tasks.
OVIS @cite_11 has attempted to address the scheduling of resources based on predictive failure analysis, but not based on the user-requested resource allocation. Additionally, OVIS's scope was limited to resource allocation improvements that address task scheduling around node failures, not optimizing cluster resources. While a given node's failure will impact scheduling to that node and is itself characterized by the Mean Time To Failure (MTTF), the resource scheduling to a given node is a function of that node's failure probability. Thus, OVIS attempts to address the issue of resource failures within a cluster by working around hardware failures at the node level.
{ "cite_N": [ "@cite_11" ], "mid": [ "2162485312" ], "abstract": [ "Traditional cluster monitoring approaches consider nodes in singleton, using manufacturer-specified extreme limits as thresholds for failure \"prediction\". We have developed a tool, OVIS, for monitoring and analysis of large computational platforms which, instead, uses a statistical approach to characterize single device behaviors from those of a large number of statistically similar devices. Baseline capabilities of OVIS include the visual display of deterministic information about state variables (e.g., temperature, CPU utilization, fan speed) and their aggregate statistics. Visual consideration of the cluster as a comparative ensemble, rather than as singleton nodes, is an easy and useful method for tuning cluster configuration and determining effects of realtime changes. Additionally, OVIS incorporates a novel Bayesian inference scheme to dynamically infer models for the normal behavior of a system and to determine bounds on the probability of values evinced in the system. Individual node values that are unlikely given the current applicable model are flagged as aberrant. This can be a much earlier indicator of problems than waiting for the crossing of some threshold that is necessarily set high to preclude too many false alarms. We present OVIS and discuss its applications in cluster configuration and environmental tuning and to abnormality and problem discovery in our production clusters." ] }
1812.09537
2906641606
Motivation: Traditional computational cluster schedulers are based on user inputs and run-time resource requests for memory and CPU, not IO. The run times of heavily IO-bound tasks, like those seen in many big data and bioinformatics problems, depend on the IO subsystem's scheduling and are problematic for cluster resource scheduling. The rescheduling of IO-intensive and errant tasks represents lost resources. Understanding the conditions of both successful and failed tasks, and differentiating between them, could provide knowledge for enhancing cluster scheduling and intelligent resource optimization. Results: We analyze a production computational cluster contributing 6.7 thousand CPU hours to research over two years. Through this analysis we develop a machine learning task-profiling agent for clusters that attempts to predict failures among identically provisioned tasks.
Larger-scale system monitoring of HPC and the Open Science Grid @cite_24 has used systems such as OVIS @cite_11 or TACC Stats @cite_1 . OVIS uses a Bayesian inference scheme to dynamically infer models for the normal behavior of a system and to determine bounds on the probability of values evinced in the system. OVIS addresses hardware-related failure issues and system-level performance analysis based on MTTF analysis of a given system.
{ "cite_N": [ "@cite_24", "@cite_1", "@cite_11" ], "mid": [ "2124088880", "2011516616", "2162485312" ], "abstract": [ "The Open Science Grid (OSG) provides a distributed facility where the Consortium members provide guaranteed and opportunistic access to shared computing and storage resources. OSG provides support for and evolution of the infrastructure through activities that cover operations, security, software, troubleshooting, addition of new capabilities, and support for existing and engagement with new communities. The OSG SciDAC-2 project provides specific activities to manage and evolve the distributed infrastructure and support its use. The innovative aspects of the project are the maintenance and performance of a collaborative (shared & common) petascale national facility over tens of autonomous computing sites, for many hundreds of users, transferring terabytes of data a day, executing tens of thousands of jobs a day, and providing robust and usable resources for scientific groups of all types and sizes. More information can be found at the OSG web site: www.opensciencegrid.org.", "This paper reports on a comprehensive, fully automated resource use monitoring package, TACC Stats, which enables both consultants, users and other stakeholders in an HPC system to systematically and actively identify jobs applications that could benefit from expert support and to aid in the diagnosis of software and hardware issues. TACC Stats continuously collects and analyzes resource usage data for every job run on a system and differs significantly from conventional profilers because it requires no action on the part of the user or consultants -- it is always collecting data on every node for every job. TACC Stats is open source and downloadable, configurable and compatible with general Linux-based computing platforms, and extensible to new CPU architectures and hardware devices. It is meant to provide a comprehensive resource usage monitoring solution. In addition to describing TACC Stats, the paper illustrates its application to identifying production jobs which have inefficient resource use characteristics.", "Traditional cluster monitoring approaches consider nodes in singleton, using manufacturer-specified extreme limits as thresholds for failure \"prediction\". We have developed a tool, OVIS, for monitoring and analysis of large computational platforms which, instead, uses a statistical approach to characterize single device behaviors from those of a large number of statistically similar devices. Baseline capabilities of OVIS include the visual display of deterministic information about state variables (e.g., temperature, CPU utilization, fan speed) and their aggregate statistics. Visual consideration of the cluster as a comparative ensemble, rather than as singleton nodes, is an easy and useful method for tuning cluster configuration and determining effects of realtime changes. Additionally, OVIS incorporates a novel Bayesian inference scheme to dynamically infer models for the normal behavior of a system and to determine bounds on the probability of values evinced in the system. Individual node values that are unlikely given the current applicable model are flagged as aberrant. This can be a much earlier indicator of problems than waiting for the crossing of some threshold that is necessarily set high to preclude too many false alarms. 
We present OVIS and discuss its applications in cluster configuration and environmental tuning and to abnormality and problem discovery in our production clusters." ] }
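In a much simplified form, the following sketch mimics OVIS's idea of inferring a normal-behavior model from an ensemble of statistically similar nodes and flagging improbable values, using a plain Gaussian fit in place of OVIS's Bayesian inference scheme. The metric values and significance level are placeholders.

import numpy as np
from scipy import stats

def flag_aberrant(values, alpha=0.001):
    # Fit a normal-behavior model to the whole ensemble of similar nodes.
    mu, sigma = values.mean(), values.std(ddof=1)
    # Two-sided tail probability of each node's value under that model.
    p = 2 * stats.norm.sf(np.abs(values - mu) / sigma)
    return np.where(p < alpha)[0]  # indices of aberrant nodes

temps = np.random.default_rng(1).normal(55.0, 2.0, size=512)  # placeholder
temps[17] = 80.0                                              # injected fault
print(flag_aberrant(temps))  # -> [17]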
1812.09380
2905888739
Recommendation systems are widely used by different service providers, especially those that interact with a large community of users. This paper introduces a recommender system based on community detection. The recommendation is provided using the local and global similarities between users. The local information is obtained from communities, and the global information is based on the ratings. Here, a new fuzzy community detection method using the personalized PageRank metaphor is introduced. The fuzzy membership values of the users to the communities are utilized to define a similarity measure. The method is evaluated by using two well-known datasets: MovieLens and FilmTrust. The results show that our method outperforms recent recommender systems.
Ghavipour and Meybodi @cite_9 introduced a fuzzy method in which the level of users' trust in each other is defined as fuzzy. For this purpose, they propose a method to adjust the membership functions of fuzzy trust and distrust in recommender systems by using learning automata.
{ "cite_N": [ "@cite_9" ], "mid": [ "2531287176" ], "abstract": [ "We propose a learning automata-based method for optimizing membership functions.The proposed method adjusts the number and the position of membership functions.The proposed method can be used without any change in any fuzzy recommender system.The performance of proposed method is tested on well-known datasets.The results show that the proposed method improves the recommendation accuracy. Incorporating trust and distrust information into collaborative recommender systems alleviates data sparsity and cold start problems. Since trust and distrust are a gradual phenomenon, they can be stated more naturally by fuzzy logic. Finding the most appropriate fuzzy sets which cover the domains of trust and distrust is not an easy task. Existing research on fuzzy modelling of trust and distrust has not considered the optimization of membership functions. In this paper, we address this issue and propose a continuous action-set learning automata (CALA)-based method to adjust membership functions of fuzzy trust and distrust during the lifetime of recommender system in terms of recommendation error. By assigning a CALA to the centre parameter of each triangular membership function, the proposed method optimizes the number and the position of fuzzy sets. To the best of our knowledge, this is the first effort in this direction. The experimental results indicate that using the proposed method in fuzzy recommender systems improves the recommendation accuracy." ] }
1812.09380
2905888739
Recommendation systems are widely used by different service providers, especially those that interact with a large community of users. This paper introduces a recommender system based on community detection. The recommendation is provided using the local and global similarities between users. The local information is obtained from communities, and the global information is based on the ratings. Here, a new fuzzy community detection method using the personalized PageRank metaphor is introduced. The fuzzy membership values of the users to the communities are utilized to define a similarity measure. The method is evaluated by using two well-known datasets: MovieLens and FilmTrust. The results show that our method outperforms recent recommender systems.
@cite_6 presented a clustering algorithm to address the gray-sheep users problem in recommender systems. They demonstrated that collaborative filtering algorithms fail to make accurate recommendations for gray-sheep users, so they proposed the k-means clustering algorithm to identify these users and make reliable recommendations for them by using their content-based profiles. They also introduced new, improved centroid selection approaches and distance measures for the k-means clustering algorithm. The results showed that the centroid selection approaches did not considerably affect the cluster quality, but the distance measure can alter the performance of the clustering algorithm.
{ "cite_N": [ "@cite_6" ], "mid": [ "1974196922" ], "abstract": [ "We provide detailed analysis of gray-sheep users problem in recommender systems.We show how conventional collaborative filtering fail for gray-sheep users problem.We use K-means clustering to separate these users from rest of the users.We propose switching hybrid recommender system to overcome this problem. Recommender systems apply data mining and machine learning techniques for filtering unseen information and can predict whether a user would like a given item. This paper focuses on gray-sheep users problem responsible for the increased error rate in collaborative filtering based recommender systems. This paper makes the following contributions: we show that (1) the presence of gray-sheep users can affect the performance - accuracy and coverage - of the collaborative filtering based algorithms, depending on the data sparsity and distribution; (2) gray-sheep users can be identified using clustering algorithms in offline fashion, where the similarity threshold to isolate these users from the rest of community can be found empirically. We propose various improved centroid selection approaches and distance measures for the K-means clustering algorithm; (3) content-based profile of gray-sheep users can be used for making accurate recommendations. We offer a hybrid recommendation algorithm to make reliable recommendations for gray-sheep users. To the best of our knowledge, this is the first attempt to propose a formal solution for gray-sheep users problem. By extensive experimental results on two different datasets (MovieLens and community of movie fans in the FilmTrust website), we showed that the proposed approach reduces the recommendation error rate for the gray-sheep users while maintaining reasonable computational performance." ] }
1812.09380
2905888739
Recommendation systems are widely used by different service providers, especially those that interact with a large community of users. This paper introduces a recommender system based on community detection. The recommendation is provided using the local and global similarities between users. The local information is obtained from communities, and the global information is based on the ratings. Here, a new fuzzy community detection method using the personalized PageRank metaphor is introduced. The fuzzy membership values of the users to the communities are utilized to define a similarity measure. The method is evaluated by using two well-known datasets: MovieLens and FilmTrust. The results show that our method outperforms recent recommender systems.
Fulan @cite_4 proposed a new Community-based User domain Collaborative Recommendation Algorithm (CUCRA). This algorithm is performed in two stages: first, it builds an offline user domain model; second, it recommends items to target users in the model by applying collaborative filtering. The former stage consists of three steps: (1) calculate user similarities using a user-item preference dataset; (2) transform the user-item dataset into a user-user social network with the KNN method; (3) find communities with similar user preferences to define a user domain model using community detection methods. This method has excellent online performance since it recommends items to users within communities instead of across the whole social network. Results showed that the time complexity of the algorithm was reduced to O(n).
{ "cite_N": [ "@cite_4" ], "mid": [ "1558143660" ], "abstract": [ "Collaborative Filtering (CF) is a commonly used technique in recommendation systems. It can promote items of interest to a target user from a large selection of available items. It is divided into two broad classes: memory-based algorithms and model-based algorithms. The latter requires some time to build a model but recommends online items quickly, while the former is time-consuming but does not require pre-building time. Considering the shortcomings of the two types of algorithms, we propose a novel Community-based User domain Collaborative Recommendation Algorithm (CUCRA). The idea comes from the fact that recommendations are usually made by users with similar preferences. The first step is to build a user-user social network based on users' preference data. The second step is to find communities with similar user preferences using a community detective algorithm. Finally, items are recommended to users by applying collaborative filtering on communities. Because we recommend items to users in communities instead of to an entire social network, the method has perfect online performance. Applying this method to a collaborative tagging system, experimental results show that the recommendation accuracy of CUCRA is relatively good, and the online time-complexity reduces to O(n)." ] }
1812.09380
2905888739
Recommendation systems are widely used by different service providers, especially those that interact with a large community of users. This paper introduces a recommender system based on community detection. The recommendation is provided using the local and global similarities between users. The local information is obtained from communities, and the global information is based on the ratings. Here, a new fuzzy community detection method using the personalized PageRank metaphor is introduced. The fuzzy membership values of the users to the communities are utilized to define a similarity measure. The method is evaluated by using two well-known datasets: MovieLens and FilmTrust. The results show that our method outperforms recent recommender systems.
Guo @cite_17 proposed three different approaches from the perspective of preference modeling to alleviate the data sparsity and cold start problems. Low accuracy and coverage are also issues for recommender systems that have insufficient ratings, so this work addresses these issues as well. Firstly, it combines the ratings of trusted neighbors to form a new rating profile for the active users. Secondly, it introduces a new Bayesian similarity measure in order to make better use of user ratings. Thirdly, it eliminates the concerned issues by proposing a new information source based on virtual product experience in virtual reality environments.
{ "cite_N": [ "@cite_17" ], "mid": [ "2070845054" ], "abstract": [ "Our research aims to tackle the problems of data sparsity and cold start of traditional recommender systems. Insufficient ratings often result in poor quality of recommendations in terms of accuracy and coverage. To address these issues, we propose three different approaches from the perspective of preference modelling. Firstly, we propose to merge the ratings of trusted neighbors and thus form a new rating profile for the active users, based on which better recommendations can be generated. Secondly, we aim to make better use of user ratings and introduce a novel Bayesian similarity measure by taking into account both the direction and length of rating vectors. Thirdly, we propose a new information source called prior ratings based on virtual product experience in virtual reality environments, in order to inherently resolve the concerned problems." ] }
1812.09380
2905888739
Recommendation systems are widely used by different service providers, especially those that interact with a large community of users. This paper introduces a recommender system based on community detection. The recommendation is provided using the local and global similarities between users. The local information is obtained from communities, and the global information is based on the ratings. Here, a new fuzzy community detection method using the personalized PageRank metaphor is introduced. The fuzzy membership values of the users to the communities are utilized to define a similarity measure. The method is evaluated by using two well-known datasets: MovieLens and FilmTrust. The results show that our method outperforms recent recommender systems.
@cite_10 proposed a multi-view clustering method to address the low accuracy and coverage in clustering-based recommender systems. In this method, users are iteratively clustered from the views of both rating patterns and social trust relationships.
{ "cite_N": [ "@cite_10" ], "mid": [ "2117346966" ], "abstract": [ "Although demonstrated to be efficient and scalable to large-scale data sets, clustering-based recommender systems suffer from relatively low accuracy and coverage. To address these issues, we develop a multiview clustering method through which users are iteratively clustered from the views of both rating patterns and social trust relationships. To accommodate users who appear in two different clusters simultaneously, we employ a support vector regression model to determine a prediction for a given item, based on user-, item- and prediction-related features. To accommodate (cold) users who cannot be clustered due to insufficient data, we propose a probabilistic method to derive a prediction from the views of both ratings and trust relationships. Experimental results on three real-world data sets demonstrate that our approach can effectively improve both the accuracy and coverage of recommendations as well as in the cold start situation, moving clustering-based recommender systems closer towards practical use." ] }
1812.09380
2905888739
Recommendation systems are widely used by different service providers, especially those that interact with a large community of users. This paper introduces a recommender system based on community detection. The recommendation is provided using the local and global similarities between users. The local information is obtained from communities, and the global information is based on the ratings. Here, a new fuzzy community detection method using the personalized PageRank metaphor is introduced. The fuzzy membership values of the users to the communities are utilized to define a similarity measure. The method is evaluated by using two well-known datasets: MovieLens and FilmTrust. The results show that our method outperforms recent recommender systems.
Alizade and Sheugh @cite_19 proposed a multi-view clustering method based on Euclidean distance, combining similarity-based distances and trust-based distances. This method mitigates the low accuracy and coverage of clustering-based recommender systems.
{ "cite_N": [ "@cite_19" ], "mid": [ "2403905660" ], "abstract": [ "In recent years, collaborative filtering (CF) methods are important and widely accepted techniques are available for recommender systems. One of these techniques is user based that produces useful recommendations based on the similarity by the ratings of likeminded users. However, these systems suffer from several inherent shortcomings such as data sparsity and cold start problems. With the development of social network, trust measure introduced as a new approach to overcome the CF problems. On the other hand, trust-aware recommender systems are techniques to make use of trust statements and user personal data in social networks to improve the accuracy of rating prediction for cold start users. In addition, clustering-based recommender systems are other kind of systems that to be efficient and scalable to large-scale data sets but these systems suffer from relatively low accuracy and especially coverage too. Therefore to address these problems, in this paper we proposed a multi-view clustering based on Euclidean distance by combination both similarity view and trust relationships that is including explicit and implicit trusts. In order to analyze the effectiveness of the proposed method we used the real-world FilmTrust dataset. The experimental results on this data sets show that our approach can effectively improve both the accuracy and especially coverage of recommendations as well as in the cold start problem." ] }
1812.09150
2905908122
This paper is devoted to the stochastic approximation of entropically regularized Wasserstein distances between two probability measures, also known as Sinkhorn divergences. The semi-dual formulation of such regularized optimal transportation problems can be rewritten as a non-strongly concave optimisation problem, which makes it possible to implement a Robbins-Monro stochastic algorithm to estimate the Sinkhorn divergence using a sequence of data sampled from one of the two distributions. Our main contribution is to establish the almost sure convergence and the asymptotic normality of a new recursive estimator of the Sinkhorn divergence between two probability measures in the discrete and semi-discrete settings. We also study the rate of convergence of the expected excess risk of this estimator in the absence of strong concavity of the objective function. Numerical experiments on synthetic and real datasets are also provided to illustrate the usefulness of our approach for data analysis.
Obtaining limiting distributions for empirical Wasserstein distances when both @math and @math are absolutely continuous measures has been the subject of various works in asymptotic statistics @cite_31 @cite_25 @cite_30 @cite_22 @cite_5 . For probability measures supported on finite spaces, limiting distributions for the empirical Wasserstein distance have been obtained in @cite_38 , while the asymptotic distribution of the empirical Sinkhorn divergence has recently been considered in @cite_8 @cite_15 . From a statistical perspective, the results in this paper on the limiting distributions of stochastic algorithms for entropically regularized optimal transport could also lead to new procedures for goodness-of-fit testing between multivariate distributions using a simple algorithm with a low computational cost.
{ "cite_N": [ "@cite_30", "@cite_38", "@cite_22", "@cite_8", "@cite_15", "@cite_5", "@cite_31", "@cite_25" ], "mid": [ "2611625744", "2530707152", "2026953971", "2770187377", "2897678613", "2244432928", "2082531187", "2084887650" ], "abstract": [ "We consider the problem of optimal transportation with quadratic cost between a empirical measure and a general target probability on R d , with d @math 1. We provide new results on the uniqueness and stability of the associated optimal transportation potentials , namely, the minimizers in the dual formulation of the optimal transportation problem. As a consequence, we show that a CLT holds for the empirical transportation cost under mild moment and smoothness requirements. The limiting distributions are Gaussian and admit a simple description in terms of the optimal transportation potentials.", "Summary The Wasserstein distance is an attractive tool for data analysis but statistical inference is hindered by the lack of distributional limits. To overcome this obstacle, for probability measures supported on finitely many points, we derive the asymptotic distribution of empirical Wasserstein distances as the optimal value of a linear programme with random objective function. This facilitates statistical inference (e.g. confidence intervals for sample-based Wasserstein distances) in large generality. Our proof is based on directional Hadamard differentiability. Failure of the classical bootstrap and alternatives are discussed. The utility of the distributional results is illustrated on two data sets.", "Semiparametric models to describe the functional relationship between k groups of observations are broadly applied in statistical analysis, ranging from nonparametric ANOVA to proportional hazard (ph) rate models in survival analysis. In this paper we deal with the empirical assessment of the validity of such a model, which will be denoted as a “structural relationship model”. To this end Hadamard differentiability of a suitable goodness-of-fit measure in the k-sample case is proved. This yields asymptotic limit laws which are applied to construct tests for various semiparametric models, including the Cox ph model. Two types of asymptotics are obtained, first when the hypothesis of the semiparametric model under investigation holds true, and second for the case when a fixed alternative is present. The latter result can be used to validate the presence of a semiparametric model instead of simply checking the null hypothesis “the model holds true”. Finally, various bootstrap approximations are numerically investigated and a data example is analyzed.", "The notion of Sinkhorn divergence has recently gained popularity in machine learning and statistics, as it makes feasible the use of smoothed optimal transportation distances for data analysis. The Sinkhorn divergence allows the fast computation of an entropically regularized Wasserstein distance between two probability distributions supported on a finite metric space of (possibly) high-dimension. For data sampled from one or two unknown probability distributions, we derive central limit theorems for empirical Sinkhorn divergences. We also propose a bootstrap procedure which allows to obtain new test statistics for measuring the discrepancies between multivariate probability distributions. The strategy of proof uses the notions of directional Hadamard differentiability and delta-method in this setting. 
It is inspired by the results in the work of Sommerfeld and Munk (2016) on the asymptotic distribution of empirical Wasserstein distance on finite space using un-regularized transportation costs. Simulated and real datasets are used to illustrate our approach. A comparison with existing methods to measure the discrepancy between multivariate distributions is also proposed.", "We derive limit distributions for certain empirical regularized optimal transport distances between probability distributions supported on a finite metric space and show consistency of the (naive) bootstrap. In particular, we prove that the empirical regularized transport plan itself asymptotically follows a Gaussian law. The theory includes the Boltzmann-Shannon entropy regularization and hence a limit law for the widely applied Sinkhorn divergence. Our approach is based on an application of the implicit function theorem to necessary and sufficient optimality conditions for the regularized transport problem. The asymptotic results are investigated in Monte Carlo simulations. We further discuss computational and statistical applications, e.g. confidence bands for colocalization analysis of protein interaction networks based on regularized optimal transport.", "We derive central limit theorems for the Wasserstein distance between the empirical distributions of Gaussian samples. The cases are distinguished whether the underlying laws are the same or different. Results are based on the (quadratic) Frechet differentiability of the Wasserstein distance in the gaussian case. Extensions to elliptically symmetric distributions are discussed as well as several applications such as bootstrap and statistical testing.", "We consider the Wasserstein distance between a sample distribution and the set of normal distributions as a measure of nonnormality. By considering the standardized version of this distance we obtain a version of Shapiro-Wilk's test of normality. The asymptotic behavior of the statistic is studied using approximations of the quantile process by Brownian bridges. This method differs from the ad hoc method of de Wet and Venter and permits a similar analysis for testing other location scale families.", "" ] }
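To make the Robbins-Monro recursion concrete, here is a sketch of stochastic gradient ascent on the semi-dual objective for a discrete target measure, together with a recursive running-average estimate of the Sinkhorn divergence. The step-size schedule and constants are illustrative choices; the paper's precise recursion and its assumptions should be taken from the text.

import numpy as np

def stochastic_semidual_sinkhorn(sample_x, y, nu, cost, eps=0.1,
                                 gamma=1.0, n_iter=100000):
    # nu: weights of the discrete target sum_j nu_j delta_{y_j};
    # sample_x(): draws one sample from the (unknown) source measure;
    # cost(x, y): vector of c(x, y_j) for all j.
    J = len(nu)
    v = np.zeros(J)        # semi-dual variable
    v_bar = np.zeros(J)    # Polyak-Ruppert average
    w_hat = 0.0            # recursive Sinkhorn divergence estimate
    for n in range(1, n_iter + 1):
        x = sample_x()
        z = (v - cost(x, y)) / eps
        m = z.max()                               # log-sum-exp stabilization
        p = nu * np.exp(z - m)
        s = p.sum()
        h = v @ nu - eps * (np.log(s) + m) - eps  # h_eps(x, v)
        grad = nu - p / s                         # gradient of h w.r.t. v
        v = v + (gamma / np.sqrt(n)) * grad       # Robbins-Monro step
        v_bar += (v - v_bar) / n
        w_hat += (h - w_hat) / n
    return w_hat, v_bar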
1812.09134
2808513732
In this work, we propose a novel mobile rescue robot equipped with immersive stereoscopic teleperception and teleoperation control. This robot is designed with the capability to safely perform a casualty-extraction procedure. We have built a proof-of-concept mobile rescue robot called ResQbot as the experimental platform. An approach called “loco-manipulation” is used to perform the casualty-extraction procedure with this platform. The performance of the robot is evaluated in terms of task accomplishment and safety by conducting a mock rescue experiment, using a custom-made, human-sized sensorised dummy as the casualty. In terms of safety, we observe several parameters during the experiment, including impact force, acceleration, speed and displacement of the dummy’s head. We also compare the performance of the proposed immersive stereoscopic teleperception to conventional monocular teleperception. The results of the experiments show that the observed safety parameters are below key safety thresholds that could possibly lead to head or neck injuries. Moreover, the teleperception comparison results demonstrate an improvement in task-accomplishment performance when the operator uses the immersive teleperception.
Wide-ranging robotics research studies have been undertaken in the area of search, exploration, and monitoring, specifically with applications in SAR scenarios @cite_9 @cite_2 @cite_6 . Despite the use of the term 'rescue' in SAR, little attention has been given to the development of a rescue robot that is capable of performing a physical rescue mission, including loading and transporting a victim to a safe zone---a.k.a. casualty extraction.
{ "cite_N": [ "@cite_9", "@cite_6", "@cite_2" ], "mid": [ "1996985406", "2107664584", "1969161726" ], "abstract": [ "In this paper, we propose a stochastic differential equation-based exploration algorithm to enable exploration in three-dimensional indoor environments with a payload constrained micro-aerial vehicle (MAV). We are able to address computation, memory, and sensor limitations by considering only the known occupied space in the current map. We determine regions for further exploration based on the evolution of a stochastic differential equation that simulates the expansion of a system of particles with Newtonian dynamics. The regions of most significant particle expansion correlate to unexplored space. After identifying and processing these regions, the autonomous MAV navigates to these locations to enable fully autonomous exploration. The performance of the approach is demonstrated through numerical simulations and experimental results in single and multi-floor indoor experiments.", "Wilderness Search and Rescue (WiSAR) entails searching over large regions in often rugged remote areas. Because of the large regions and potentially limited mobility of ground searchers, WiSAR is an ideal application for using small (human-packable) unmanned aerial vehicles (UAVs) to provide aerial imagery of the search region. This paper presents a brief analysis of the WiSAR problem with emphasis on practical aspects of visual-based aerial search. As part of this analysis, we present and analyze a generalized contour search algorithm, and relate this search to existing coverage searches. Extending beyond laboratory analysis, lessons from field trials with search and rescue personnel indicated the immediate need to improve two aspects of UAV-enabled search: How video information is presented to searchers and how UAV technology is integrated into existing WiSAR teams. In response to the first need, three computer vision algorithms for improving video display presentation are compared; results indicate that constructing temporally localized image mosaics is more useful than stabilizing video imagery. In response to the second need, a goal-directed task analysis of the WiSAR domain was conducted and combined with field observations to identify operational paradigms and field tactics for coordinating the UAV operator, the payload operator, the mission manager, and ground searchers. © 2008 Wiley Periodicals, Inc.", "Search and rescue operations can greatly benefit from the use of autonomous UAVs to survey the environment and collect evidence about the position of a missing person. To minimize the time to find the victim, some fundamental parameters need to be accounted for in the design of the search algorithms: 1) quality of sensory data collected by the UAVs, 2) UAVs energy limitations, 3) environmental hazards (e.g. winds, trees), 4) level of information exchange coordination between UAVs. In this paper, we discuss how these parameters can affect the search task and present some of the research avenues we have been exploring. We then study the performance of different search algorithms when the time to find the victim is the optimization criterion." ] }
1812.09355
2905381038
Despite the remarkable evolution of deep neural networks in natural language processing (NLP), their interpretability remains a challenge. Previous work largely focused on what these models learn at the representation level. We break this analysis down further and study individual dimensions (neurons) in the vector representation learned by end-to-end neural models in NLP tasks. We propose two methods: Linguistic Correlation Analysis, based on a supervised method to extract the most relevant neurons with respect to an extrinsic task, and Cross-model Correlation Analysis, an unsupervised method to extract salient neurons w.r.t. the model itself. We evaluate the effectiveness of our techniques by ablating the identified neurons and reevaluating the network's performance for two tasks: neural machine translation (NMT) and neural language modeling (NLM). We further present a comprehensive analysis of neurons with the aim to address the following questions: i) how localized or distributed are different linguistic properties in the models? ii) are certain neurons exclusive to some properties and not others? iii) is the information more or less distributed in NMT vs. NLM? and iv) how important are the neurons identified through the linguistic correlation method to the overall task? Our code is publicly available as part of the NeuroX toolkit ( 2019).
Much of the previous work has looked into neural models from the perspective of what they learn about various language properties. This includes analyzing word and sentence embeddings @cite_6 @cite_23 @cite_5 , recurrent neural network (RNN) states @cite_3 @cite_27 , and NMT representations @cite_26 @cite_22 @cite_8 . The language properties mainly analyzed are morphological @cite_23 @cite_12 , semantic @cite_23 and syntactic @cite_3 @cite_1 @cite_5 .
{ "cite_N": [ "@cite_26", "@cite_22", "@cite_8", "@cite_1", "@cite_3", "@cite_6", "@cite_27", "@cite_23", "@cite_5", "@cite_12" ], "mid": [ "2605717780", "2773956126", "2773621464", "", "2563574619", "2515741950", "2601836666", "", "2799124508", "2428172939" ], "abstract": [ "Neural machine translation (MT) models obtain state-of-the-art performance while maintaining a simple, end-to-end architecture. However, little is known about what these models learn about source and target languages during the training process. In this work, we analyze the representations learned by neural MT models at various levels of granularity and empirically evaluate the quality of the representations for learning morphology through extrinsic part-of-speech and morphological tagging tasks. We conduct a thorough investigation along several parameters: word-based vs. character-based representations, depth of the encoding layer, the identity of the target language, and encoder vs. decoder representations. Our data-driven, quantitative evaluation sheds light on important aspects in the neural MT system and its ability to capture word structure.", "While neural machine translation (NMT) models provide improved translation quality in an elegant, end-to-end framework, it is less clear what they learn about language. Recent work has started evaluating the quality of vector representations learned by NMT models on morphological and syntactic tasks. In this paper, we investigate the representations learned at different layers of NMT encoders. We train NMT systems on parallel data and use the trained models to extract features for training a classifier on two tasks: part-of-speech and semantic tagging. We then measure the performance of the classifier as a proxy to the quality of the original NMT model for the given task. Our quantitative analysis yields interesting insights regarding representation learning in NMT models. For instance, we find that higher layers are better at learning semantics while lower layers tend to be better for part-of-speech tagging. We also observe little effect of the target language on source-side representations, especially with higher quality NMT models.", "", "", "", "There is a lot of research interest in encoding variable length sentences into fixed length vectors, in a way that preserves the sentence meanings. Two common methods include representations based on averaging word vectors, and representations based on the hidden states of recurrent neural networks such as LSTMs. The sentence vectors are used as features for subsequent machine learning tasks or for pre-training in the context of deep learning. However, not much is known about the properties that are encoded in these sentence representations and about the language information they capture. We propose a framework that facilitates better understanding of the encoded representations. We define prediction tasks around isolated aspects of sentence structure (namely sentence length, word content, and word order), and score representations by the ability to train a classifier to solve each prediction task when using the representation as input. We demonstrate the potential contribution of the approach by analyzing different sentence representation mechanisms. 
The analysis sheds light on the relative strengths of different sentence embedding methods with respect to these low level prediction tasks, and on the effect of the encoded vector's dimensionality on the resulting representations.", "In this paper we analyze the gate activation signals inside the gated recurrent neural networks, and find the temporal structure of such signals is highly correlated with the phoneme boundaries. This correlation is further verified by a set of experiments for phoneme segmentation, in which better results compared to standard approaches were obtained.", "", "Although much effort has recently been devoted to training high-quality sentence embeddings, we still have a poor understanding of what they are capturing. \"Downstream\" tasks, often based on sentence classification, are commonly used to evaluate the quality of sentence representations. The complexity of the tasks makes it however difficult to infer what kind of information is present in the representations. We introduce here 10 probing tasks designed to capture simple linguistic features of sentences, and we use them to study embeddings generated by three different encoders trained in eight distinct ways, uncovering intriguing properties of both encoders and training methods.", "Dealing with the complex word forms in morphologically rich languages is an open problem in language processing, and is particularly important in translation. In contrast to most modern neural systems of translation, which discard the identity for rare words, in this paper we propose several architectures for learning word representations from character and morpheme level word decompositions. We incorporate these representations in a novel machine translation model which jointly learns word alignments and translations via a hard attention mechanism. Evaluating on translating from several morphologically rich languages into English, we show consistent improvements over strong baseline methods, of between 1 and 1.5 BLEU points." ] }
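A minimal sketch of the probing methodology shared by these analyses, with hypothetical data standing in for real encoder states (scikit-learn assumed): a simple classifier is trained on frozen representations, and its held-out accuracy serves as a proxy for how much of a linguistic property, e.g. part-of-speech, the representations encode.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Hypothetical frozen representations: one 512-d vector per token,
# paired with POS label ids (random stand-ins for real NMT/RNN states).
rng = np.random.default_rng(0)
reps = rng.normal(size=(1000, 512))
pos_tags = rng.integers(0, 12, size=1000)

X_train, X_test = reps[:800], reps[800:]
y_train, y_test = pos_tags[:800], pos_tags[800:]

# The probe: a linear classifier whose test accuracy is read as a
# measure of how linearly decodable the property is from the states.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("probe accuracy:", accuracy_score(y_test, probe.predict(X_test)))
```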
1812.09276
2951474242
With the fast growth in the visual surveillance and security sectors, thermal infrared images have become increasingly necessary in a large variety of industrial applications. This is true even though IR sensors are still more expensive than their RGB counterparts of the same resolution. In this paper, we propose a deep learning solution to enhance the thermal image resolution. The following results are given: (I) Introduction of a multimodal, visual-thermal fusion model that addresses thermal image super-resolution by integrating high-frequency information from the visual image. (II) Investigation of different network architecture schemes in the literature, their up-sampling methods, learning procedures, and their optimization functions, showing their beneficial contribution to the super-resolution problem. (III) Presentation of a benchmark ULB17-VT dataset that contains thermal images and their visual image counterparts. (IV) Presentation of a qualitative evaluation on a large test set with 58 samples and 22 raters, which shows that our proposed model performs favorably against the state of the art.
Since the super-resolution output is similar to the low-resolution input, differing mainly in the missing high-frequency information, learning can be restricted to producing only the residual. VDSR @cite_1 and DRCN @cite_17 train models that learn the residual between LR and HR images. Both use a skip connection that adds the input image to the model's residual output to produce the SR image. @cite_21 found that reconstructing the SR image directly at a high up-sampling scale is a challenging problem. They therefore addressed it with a gradual up-sampling procedure, using deep supervision at each up-sampling scale together with residual learning, as shown in Fig. (f).
{ "cite_N": [ "@cite_21", "@cite_1", "@cite_17" ], "mid": [ "", "2951997238", "2949079773" ], "abstract": [ "", "We present a highly accurate single-image super-resolution (SR) method. Our method uses a very deep convolutional network inspired by VGG-net used for ImageNet classification simonyan2015very . We find increasing our network depth shows a significant improvement in accuracy. Our final model uses 20 weight layers. By cascading small filters many times in a deep network structure, contextual information over large image regions is exploited in an efficient way. With very deep networks, however, convergence speed becomes a critical issue during training. We propose a simple yet effective training procedure. We learn residuals only and use extremely high learning rates ( @math times higher than SRCNN dong2015image ) enabled by adjustable gradient clipping. Our proposed method performs better than existing methods in accuracy and visual improvements in our results are easily noticeable.", "We propose an image super-resolution method (SR) using a deeply-recursive convolutional network (DRCN). Our network has a very deep recursive layer (up to 16 recursions). Increasing recursion depth can improve performance without introducing new parameters for additional convolutions. Albeit advantages, learning a DRCN is very hard with a standard gradient descent method due to exploding vanishing gradients. To ease the difficulty of training, we propose two extensions: recursive-supervision and skip-connection. Our method outperforms previous methods by a large margin." ] }
1812.09276
2951474242
With the fast growth in the visual surveillance and security sectors, thermal infrared images have become increasingly necessary in a large variety of industrial applications. This is true even though IR sensors are still more expensive than their RGB counterparts of the same resolution. In this paper, we propose a deep learning solution to enhance the thermal image resolution. The following results are given: (I) Introduction of a multimodal, visual-thermal fusion model that addresses thermal image super-resolution by integrating high-frequency information from the visual image. (II) Investigation of different network architecture schemes in the literature, their up-sampling methods, learning procedures, and their optimization functions, showing their beneficial contribution to the super-resolution problem. (III) Presentation of a benchmark ULB17-VT dataset that contains thermal images and their visual image counterparts. (IV) Presentation of a qualitative evaluation on a large test set with 58 samples and 22 raters, which shows that our proposed model performs favorably against the state of the art.
The optimization procedure minimizes the distance between the original HR image and the generated SR image. The most widely used objective in SR is the content loss, computed with MSE as in @cite_1 or the Charbonnier penalty as in @cite_21 . SRGAN @cite_26 instead uses an adversarial loss, and @cite_12 uses a perceptual similarity loss to enhance the reconstructed image.
{ "cite_N": [ "@cite_21", "@cite_1", "@cite_12", "@cite_26" ], "mid": [ "", "2951997238", "2950689937", "2523714292" ], "abstract": [ "", "We present a highly accurate single-image super-resolution (SR) method. Our method uses a very deep convolutional network inspired by VGG-net used for ImageNet classification simonyan2015very . We find increasing our network depth shows a significant improvement in accuracy. Our final model uses 20 weight layers. By cascading small filters many times in a deep network structure, contextual information over large image regions is exploited in an efficient way. With very deep networks, however, convergence speed becomes a critical issue during training. We propose a simple yet effective training procedure. We learn residuals only and use extremely high learning rates ( @math times higher than SRCNN dong2015image ) enabled by adjustable gradient clipping. Our proposed method performs better than existing methods in accuracy and visual improvements in our results are easily noticeable.", "We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results.", "Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. 
An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method." ] }
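To make the loss choices concrete, a short sketch contrasting the MSE content loss with the Charbonnier penalty, a smooth L1 variant (PyTorch assumed; `eps` is an arbitrary small constant):

```python
import torch
import torch.nn.functional as F

def charbonnier_loss(sr, hr, eps=1e-6):
    # Differentiable L1-like penalty (as used for gradual up-sampling
    # in @cite_21); less outlier-sensitive than MSE.
    return torch.sqrt((sr - hr) ** 2 + eps ** 2).mean()

sr, hr = torch.rand(2, 1, 32, 32), torch.rand(2, 1, 32, 32)
mse = F.mse_loss(sr, hr)            # content loss as in @cite_1
charb = charbonnier_loss(sr, hr)
```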
1812.09336
2905699315
In this project, we worked on speech recognition, specifically predicting individual words based on both the video frames and audio. Empowered by convolutional neural networks, recent speech recognition and lip reading models achieve performance comparable to humans. We re-implemented the state-of-the-art model and derived variants of it. We then conducted extensive experiments covering the effectiveness of the attention mechanism, a more accurate residual network backbone with pre-trained weights, and the sensitivity of our model to audio input with and without noise.
Earlier solutions to speech recognition mostly applied either classical signal processing techniques or deep learning to only the video data or the audio data. In the video space, LipNet @cite_33 is one example, where a CNN is used with bidirectional GRUs to predict the word being said in the current frame using the sequence of words said before. It then uses these frame-wise predictions to determine the optimal sequence of predicted words. Similarly, @cite_4 built multiple CNNs based on the VGG-M architecture that operate on 25 fps video to detect words from a sequence of lip movements. @cite_32 also uses spatiotemporal convolutions to predict the word being said in the current frame, after landmarking and using standard 3D convolutions to augment the input video data.
{ "cite_N": [ "@cite_32", "@cite_4", "@cite_33" ], "mid": [ "2596627958", "2594690981", "2578229578" ], "abstract": [ "We propose an end-to-end deep learning architecture for word-level visual speech recognition. The system is a combination of spatiotemporal convolutional, residual and bidirectional Long Short-Term Memory networks. We train and evaluate it on the Lipreading In-The-Wild benchmark, a challenging database of 500-size target-words consisting of 1.28sec video excerpts from BBC TV broadcasts. The proposed network attains word accuracy equal to 83.0, yielding 6.8 absolute improvement over the current state-of-the-art, without using information about word boundaries during training or testing.", "Our aim is to recognise the words being spoken by a talking face, given only the video but not the audio. Existing works in this area have focussed on trying to recognise a small number of utterances in controlled environments (e.g. digits and alphabets), partially due to the shortage of suitable datasets.", "Lipreading is the task of decoding text from the movement of a speaker's mouth. Traditional approaches separated the problem into two stages: designing or learning visual features, and prediction. More recent deep lipreading approaches are end-to-end trainable (, 2016; Chung & Zisserman, 2016a). However, existing work on models trained end-to-end perform only word classification, rather than sentence-level sequence prediction. Studies have shown that human lipreading performance increases for longer words (Easton & Basala, 1982), indicating the importance of features capturing temporal context in an ambiguous communication channel. Motivated by this observation, we present LipNet, a model that maps a variable-length sequence of video frames to text, making use of spatiotemporal convolutions, a recurrent network, and the connectionist temporal classification loss, trained entirely end-to-end. To the best of our knowledge, LipNet is the first end-to-end sentence-level lipreading model that simultaneously learns spatiotemporal visual features and a sequence model. On the GRID corpus, LipNet achieves 95.2 accuracy in sentence-level, overlapped speaker split task, outperforming experienced human lipreaders and the previous 86.4 word-level state-of-the-art accuracy (, 2016)." ] }
1812.09336
2905699315
In this project, we worked on speech recognition, specifically predicting individual words based on both the video frames and audio. Empowered by convolutional neural networks, recent speech recognition and lip reading models achieve performance comparable to humans. We re-implemented the state-of-the-art model and derived variants of it. We then conducted extensive experiments covering the effectiveness of the attention mechanism, a more accurate residual network backbone with pre-trained weights, and the sensitivity of our model to audio input with and without noise.
Deep architectures that use both audio and video data also tend to use LSTM or GRU units for their predictions. This is seen in the encoder-decoder architecture employed in @cite_20 , which uses unidirectional LSTMs to encode both the image and audio data and generates attention vectors to predict the word being said. While @cite_32 used only video frames as input, it can easily be extended to incorporate both audio and visual information, as seen in @cite_15 . That model uses two separate ResNets and BGRUs to extract features and model temporal dependencies from the visual and audio inputs, and two additional BGRUs to combine the extracted audio and visual features. @cite_13 takes another approach, using temporal multimodal networks to learn a joint distribution over mouth and lip movements together with the audio at every frame. These joint distributions are then combined to obtain a time-dependent sequence of frames and audio.
{ "cite_N": [ "@cite_15", "@cite_13", "@cite_32", "@cite_20" ], "mid": [ "2787944098", "2474638510", "2596627958", "2952746495" ], "abstract": [ "Several end-to-end deep learning approaches have been recently presented which extract either audio or visual features from the input images or audio signals and perform speech recognition. However, research on end-to-end audiovisual models is very limited. In this work, we present an end-to-end audiovisual model based on residual networks and Bidirectional Gated Recurrent Units (BGRUs). To the best of our knowledge, this is the first audiovisual fusion model which simultaneously learns to extract features directly from the image pixels and audio waveforms and performs within-context word recognition on a large publicly available dataset (LRW). The model consists of two streams, one for each modality, which extract features directly from mouth regions and raw waveforms. The temporal dynamics in each stream modality are modeled by a 2-layer BGRU and the fusion of multiple streams modalities takes place via another 2-layer BGRU. A slight improvement in the classification rate over an end-to-end audio-only and MFCC-based model is reported in clean audio conditions and low levels of noise. In presence of high levels of noise, the end-to-end audiovisual model significantly outperforms both audio-only models.", "In view of the advantages of deep networks in producing useful representation, the generated features of different modality data (such as image, audio) can be jointly learned using Multimodal Restricted Boltzmann Machines (MRB-M). Recently, audiovisual speech recognition based the M-RBM has attracted much attention, and the MRBM shows its effectiveness in learning the joint representation across audiovisual modalities. However, the built networks have weakness in modeling the multimodal sequence which is the natural property of speech signal. In this paper, we will introduce a novel temporal multimodal deep learning architecture, named as Recurrent Temporal Multimodal RB-M (RTMRBM), that models multimodal sequences by transforming the sequence of connected MRBMs into a probabilistic series model. Compared with existing multimodal networks, it's simple and efficient in learning temporal joint representation. We evaluate our model on audiovisual speech datasets, two public (AVLetters and AVLetters2) and one self-build. The experimental results demonstrate that our approach can obviously improve the accuracy of recognition compared with standard MRBM and the temporal model based on conditional RBM. In addition, RTMRBM still outperforms non-temporal multimodal deep networks in the presence of the weakness of long-term dependencies.", "We propose an end-to-end deep learning architecture for word-level visual speech recognition. The system is a combination of spatiotemporal convolutional, residual and bidirectional Long Short-Term Memory networks. We train and evaluate it on the Lipreading In-The-Wild benchmark, a challenging database of 500-size target-words consisting of 1.28sec video excerpts from BBC TV broadcasts. The proposed network attains word accuracy equal to 83.0, yielding 6.8 absolute improvement over the current state-of-the-art, without using information about word boundaries during training or testing.", "The goal of this work is to recognise phrases and sentences being spoken by a talking face, with or without the audio. 
Unlike previous works that have focussed on recognising a limited number of words or phrases, we tackle lip reading as an open-world problem - unconstrained natural language sentences, and in the wild videos. Our key contributions are: (1) a 'Watch, Listen, Attend and Spell' (WLAS) network that learns to transcribe videos of mouth motion to characters; (2) a curriculum learning strategy to accelerate training and to reduce overfitting; (3) a 'Lip Reading Sentences' (LRS) dataset for visual speech recognition, consisting of over 100,000 natural sentences from British television. The WLAS model trained on the LRS dataset surpasses the performance of all previous work on standard lip reading benchmark datasets, often by a significant margin. This lip reading performance beats a professional lip reader on videos from BBC television, and we also demonstrate that visual information helps to improve speech recognition performance even when the audio is available." ] }
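A schematic sketch of this kind of two-stream fusion (assuming per-frame visual and audio features have already been extracted, e.g. by the two ResNets; hidden sizes and the word-level readout are placeholder choices, not the exact model of @cite_15 ):

```python
import torch
import torch.nn as nn

class AVFusion(nn.Module):
    """Per-modality BGRUs model temporal dynamics; a fusion BGRU combines
    the concatenated streams before a word-level prediction."""
    def __init__(self, d_vid=256, d_aud=64, hidden=128, n_words=500):
        super().__init__()
        self.vid_gru = nn.GRU(d_vid, hidden, bidirectional=True, batch_first=True)
        self.aud_gru = nn.GRU(d_aud, hidden, bidirectional=True, batch_first=True)
        self.fusion = nn.GRU(4 * hidden, hidden, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * hidden, n_words)

    def forward(self, vid_feats, aud_feats):      # both (B, T, d)
        v, _ = self.vid_gru(vid_feats)
        a, _ = self.aud_gru(aud_feats)
        f, _ = self.fusion(torch.cat([v, a], dim=-1))
        return self.fc(f.mean(dim=1))

out = AVFusion()(torch.randn(2, 29, 256), torch.randn(2, 29, 64))
```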
1812.08927
2906528293
Two-sample testing is a fundamental problem in statistics. Despite its long history, there has been renewed interest in this problem with the advent of high-dimensional and complex data. Specifically, in the machine learning literature, there have been recent methodological developments such as classification accuracy tests. The goal of this work is to present a regression approach to comparing multivariate distributions of complex data. Depending on the chosen regression model, our framework can efficiently handle different types of variables and various structures in the data, with competitive power under many practical scenarios. Whereas previous work has been largely limited to global tests which conceal much of the local information, our approach naturally leads to a local two-sample testing framework in which we identify local differences between multivariate distributions with statistical confidence. We demonstrate the efficacy of our approach both theoretically and empirically, under some well-known parametric and nonparametric regression methods. Our proposed methods are applied to simulated data as well as a challenging astronomy data set to assess their practical usefulness.
In recent years, several attempts have been made to connect binary classification with two-sample testing. The main idea of this approach is to check whether the accuracy of a binary classifier is better than chance level and to reject the null if the difference is significant. Such an approach, referred to as an accuracy or classification test, was first conceptualized by @cite_7 and has since been investigated by several authors, including ojala2010permutation, olivetti2015statistical, ramdas2016classification, rosenblatt2016better, gagnon2016classification, and lopez2016revisiting . In the same manner as our regression framework, a key strength of the accuracy test is that it offers a flexible approach to the two-sample problem, as it can utilize any existing classification procedure in the literature. However, the classification accuracy framework is not easily converted to a local two-sample test. In addition, many classifiers are estimated by dichotomizing regression estimates, and the discrete nature of such classifiers may result in a less powerful test (see the simulation results). For the local two-sample test, our approach has similarities to independent work by @cite_8 , which estimates the Kullback-Leibler divergence between @math and @math . Our procedure, however, identifies locally significant areas with statistical confidence, whereas @cite_8 graphically decides a threshold for the significance.
{ "cite_N": [ "@cite_7", "@cite_8" ], "mid": [ "362526619", "2187584467" ], "abstract": [ "It is shown how classification learning machines can be used to do multivariate goodness-of-fit and two-sample testing.", "Comparing two sets of multivariate samples is a central problem in data analysis. From a statistical standpoint, the simplest way to perform such a comparison is to resort to a non-parametric two-sample test (TST), which checks whether the two sets can be seen as i.i.d. samples of an identical unknown distribution (the null hypothesis). If the null is rejected, one wishes to identify regions accounting for this difference. This paper presents a two-stage method providing feedback on this difference, based upon a combination of statistical learning (regression) and computational topology methods. Consider two populations, each given as a point cloud in Rd. In the first step, we assign a label to each set and we compute, for each sample point, a discrepancy measure based on comparing an estimate of the conditional probability distribution of the label given a position versus the global unconditional label distribution. In the second step, we study the height function defined at each point by the aforementioned estimated discrepancy. Topological persistence is used to identify persistent local minima of this height function, their basins defining regions of points with high discrepancy and in spatial proximity. Experiments are reported both on synthetic and real data (satellite images and handwritten digit images), ranging in dimension from d = 2 to d = 784, illustrating the ability of our method to localize discrepancies. On a general perspective, the ability to provide feedback downstream TST may prove of ubiquitous interest in exploratory statistics and data science." ] }
1812.08781
2903787679
We study object recognition under the constraint that each object class is represented by only very few observations. Semi-supervised learning, transfer learning, and few-shot recognition are all concerned with achieving fast generalization from few labeled examples. In this paper, we propose a generic framework that utilizes unlabeled data to aid generalization for all three tasks. Our approach is to create much more training data through label propagation from the few labeled examples to a vast collection of unannotated images. The main contribution of the paper is to show that such a label propagation scheme can be highly effective when the similarity metric used for propagation is transferred from other related domains. We test various combinations of supervised and unsupervised metric learning methods with various label propagation algorithms. We find that our framework is very generic, without being sensitive to any specific techniques. By taking advantage of unlabeled data in this way, we achieve significant improvements on all three tasks.
To solve a computer vision problem, it has become a common practice to build a large-scale dataset @cite_3 @cite_6 and train deep neural networks @cite_0 @cite_8 on it. This philosophy has achieved unprecedented success on many important computer vision problems @cite_3 @cite_23 @cite_26 . However, constructing a large-scale dataset is often time-consuming and expensive, and this has motivated work on unsupervised learning and problems defined on few labeled samples.
{ "cite_N": [ "@cite_26", "@cite_8", "@cite_6", "@cite_3", "@cite_0", "@cite_23" ], "mid": [ "2117539524", "1686810756", "2108598243", "2031489346", "2163605009", "1861492603" ], "abstract": [ "The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements.", "In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.", "The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. 
We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.", "The Pascal Visual Object Classes (VOC) challenge is a benchmark in visual object category recognition and detection, providing the vision and machine learning communities with a standard dataset of images and annotation, and standard evaluation procedures. Organised annually from 2005 to present, the challenge and its associated dataset has become accepted as the benchmark for object detection. This paper describes the dataset and evaluation procedure. We review the state-of-the-art in evaluated methods for both classification and detection, analyse whether the methods are statistically different, what they are learning from the images (e.g. the object or its context), and what the methods find easy or confuse. The paper concludes with lessons learnt in the three year history of the challenge, and proposes directions for future improvement and extension.", "We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5 and 17.0 which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overriding in the fully-connected layers we employed a recently-developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3 , compared to 26.2 achieved by the second-best entry.", "We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 objects types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model." ] }
1812.08781
2903787679
We study object recognition under the constraint that each object class is represented by only very few observations. Semi-supervised learning, transfer learning, and few-shot recognition are all concerned with achieving fast generalization from few labeled examples. In this paper, we propose a generic framework that utilizes unlabeled data to aid generalization for all three tasks. Our approach is to create much more training data through label propagation from the few labeled examples to a vast collection of unannotated images. The main contribution of the paper is to show that such a label propagation scheme can be highly effective when the similarity metric used for propagation is transferred from other related domains. We test various combinations of supervised and unsupervised metric learning methods with various label propagation algorithms. We find that our framework is very generic, without being sensitive to any specific techniques. By taking advantage of unlabeled data in this way, we achieve significant improvements on all three tasks.
Semi-supervised learning @cite_42 is a problem that lies between supervised learning and unsupervised learning. It aims to make more accurate predictions by leveraging a large amount of unlabeled data than would be possible using the labeled data alone. In the era of deep learning, one line of work leverages unlabeled data through deep generative models @cite_9 @cite_5 . However, training generative models is often unstable, which makes them tricky to use for recognition tasks. Recent efforts on semi-supervised learning focus on regularization by self-ensembling through a consistency loss, such as temporal ensembling @cite_27 , adversarial ensembling @cite_43 , and teacher-student distillation @cite_16 . These models treat labeled data and unlabeled data separately, without considering their relationships. The pseudo-labeling approach @cite_25 @cite_35 initializes a model on a small labeled dataset and bootstraps from its own predictions on new data. This tends to fail when the labeled set is small. Our work is most closely related to label propagation approaches @cite_31 @cite_4 , and we propose metric transfer to significantly improve the propagation performance.
{ "cite_N": [ "@cite_35", "@cite_4", "@cite_9", "@cite_42", "@cite_43", "@cite_27", "@cite_5", "@cite_31", "@cite_16", "@cite_25" ], "mid": [ "", "2074668987", "2949416428", "2407712691", "2606711863", "2951970475", "830076066", "2122457239", "2592691248", "" ], "abstract": [ "", "Hashing-based approximate nearest neighbor (ANN) search in huge databases has become popular due to its computational and memory efficiency. The popular hashing methods, e.g., Locality Sensitive Hashing and Spectral Hashing, construct hash functions based on random or principal projections. The resulting hashes are either not very accurate or are inefficient. Moreover, these methods are designed for a given metric similarity. On the contrary, semantic similarity is usually given in terms of pairwise labels of samples. There exist supervised hashing methods that can handle such semantic similarity, but they are prone to overfitting when labeled data are small or noisy. In this work, we propose a semi-supervised hashing (SSH) framework that minimizes empirical error over the labeled set and an information theoretic regularizer over both labeled and unlabeled sets. Based on this framework, we present three different semi-supervised hashing methods, including orthogonal hashing, nonorthogonal hashing, and sequential hashing. Particularly, the sequential hashing method generates robust codes in which each hash function is designed to correct the errors made by the previous ones. We further show that the sequential learning paradigm can be extended to unsupervised domains where no labeled pairs are available. Extensive experiments on four large datasets (up to 80 million samples) demonstrate the superior performance of the proposed SSH methods over state-of-the-art supervised and unsupervised hashing techniques.", "The ever-increasing size of modern data sets combined with the difficulty of obtaining label information has made semi-supervised learning one of the problems of significant practical importance in modern data analysis. We revisit the approach to semi-supervised learning with generative models and develop new models that allow for effective generalisation from small labelled data sets to large unlabelled ones. Generative approaches have thus far been either inflexible, inefficient or non-scalable. We show that deep generative models and approximate Bayesian inference exploiting recent advances in variational methods can be used to provide significant improvements, making generative approaches highly competitive for semi-supervised learning.", "We show how nonlinear embedding algorithms popular for use with \"shallow\" semi-supervised learning techniques such as kernel methods can be easily applied to deep multi-layer architectures, either as a regularizer at the output layer, or on each layer of the architecture. This trick provides a simple alternative to existing approaches to deep learning whilst yielding competitive error rates compared to those methods, and existing shallow semi-supervised techniques.", "We propose a new regularization method based on virtual adversarial loss: a new measure of local smoothness of the conditional label distribution given input. Virtual adversarial loss is defined as the robustness of the conditional label distribution around each input data point against local perturbation. Unlike adversarial training, our method defines the adversarial direction without label information and is hence applicable to semi-supervised learning. 
Because the directions in which we smooth the model are only \"virtually\" adversarial, we call our method virtual adversarial training (VAT). The computational cost of VAT is relatively low. For neural networks, the approximated gradient of virtual adversarial loss can be computed with no more than two pairs of forward- and back-propagations. In our experiments, we applied VAT to supervised and semi-supervised learning tasks on multiple benchmark datasets. With a simple enhancement of the algorithm based on the entropy minimization principle, our VAT achieves state-of-the-art performance for semi-supervised learning tasks on SVHN and CIFAR-10.", "In this paper, we present a simple and efficient method for training deep neural networks in a semi-supervised setting where only a small portion of training data is labeled. We introduce self-ensembling, where we form a consensus prediction of the unknown labels using the outputs of the network-in-training on different epochs, and most importantly, under different regularization and input augmentation conditions. This ensemble prediction can be expected to be a better predictor for the unknown labels than the output of the network at the most recent training epoch, and can thus be used as a target for training. Using our method, we set new records for two standard semi-supervised learning benchmarks, reducing the (non-augmented) classification error rate from 18.44 to 7.05 in SVHN with 500 labels and from 18.63 to 16.55 in CIFAR-10 with 4000 labels, and further to 5.12 and 12.16 by enabling the standard augmentations. We additionally obtain a clear improvement in CIFAR-100 classification accuracy by using random images from the Tiny Images dataset as unlabeled extra inputs during training. Finally, we demonstrate good tolerance to incorrect labels.", "We combine supervised learning with unsupervised learning in deep neural networks. The proposed model is trained to simultaneously minimize the sum of supervised and unsupervised cost functions by backpropagation, avoiding the need for layer-wise pre-training. Our work builds on top of the Ladder network proposed by Valpola [1] which we extend by combining the model with supervision. We show that the resulting model reaches state-of-the-art performance in semi-supervised MNIST and CIFAR-10 classification in addition to permutation-invariant MNIST classification with all labels.", "With the advent of the Internet it is now possible to collect hundreds of millions of images. These images come with varying degrees of label information. \"Clean labels\" can be manually obtained on a small fraction, \"noisy labels\" may be extracted automatically from surrounding text, while for most images there are no labels at all. Semi-supervised learning is a principled framework for combining these different label sources. However, it scales polynomially with the number of images, making it impractical for use on gigantic collections with hundreds of millions of images and thousands of classes. In this paper we show how to utilize recent results in machine learning to obtain highly efficient approximations for semi-supervised learning that are linear in the number of images. Specifically, we use the convergence of the eigenvectors of the normalized graph Laplacian to eigenfunctions of weighted Laplace-Beltrami operators. 
Our algorithm enables us to apply semi-supervised learning to a database of 80 million images gathered from the Internet.", "The recently proposed Temporal Ensembling has achieved state-of-the-art results in several semi-supervised learning benchmarks. It maintains an exponential moving average of label predictions on each training example, and penalizes predictions that are inconsistent with this target. However, because the targets change only once per epoch, Temporal Ensembling becomes unwieldy when learning large datasets. To overcome this problem, we propose Mean Teacher, a method that averages model weights instead of label predictions. As an additional benefit, Mean Teacher improves test accuracy and enables training with fewer labels than Temporal Ensembling. Without changing the network architecture, Mean Teacher achieves an error rate of 4.35 on SVHN with 250 labels, outperforming Temporal Ensembling trained with 1000 labels. We also show that a good network architecture is crucial to performance. Combining Mean Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with 4000 labels from 10.55 to 6.28 , and on ImageNet 2012 with 10 of the labels from 35.24 to 9.11 .", "" ] }
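A small sketch of graph-based label propagation, the family our framework builds on (scikit-learn's LabelSpreading over an RBF graph stands in for the propagation algorithm; the fixed RBF kernel plays the role a transferred metric would play in our setting):

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.semi_supervised import LabelSpreading

X, y_true = make_moons(n_samples=300, noise=0.08, random_state=0)

y = np.full(300, -1)                          # -1 marks unlabeled points
labeled = [np.where(y_true == c)[0][0] for c in (0, 1)]
y[labeled] = y_true[labeled]                  # keep one label per class

# Propagate the two labels over an RBF similarity graph.
lp = LabelSpreading(kernel="rbf", gamma=20).fit(X, y)
acc = (lp.transduction_ == y_true).mean()
print(f"transductive accuracy with 2 labels: {acc:.2f}")
```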
1812.08781
2903787679
We study object recognition under the constraint that each object class is represented by only very few observations. Semi-supervised learning, transfer learning, and few-shot recognition are all concerned with achieving fast generalization from few labeled examples. In this paper, we propose a generic framework that utilizes unlabeled data to aid generalization for all three tasks. Our approach is to create much more training data through label propagation from the few labeled examples to a vast collection of unannotated images. The main contribution of the paper is to show that such a label propagation scheme can be highly effective when the similarity metric used for propagation is transferred from other related domains. We test various combinations of supervised and unsupervised metric learning methods with various label propagation algorithms. We find that our framework is very generic, without being sensitive to any specific techniques. By taking advantage of unlabeled data in this way, we achieve significant improvements on all three tasks.
Given some training data in training categories, few-shot recognition @cite_28 requires the classifier to generalize to new categories from observing very few examples, often 1-shot or 5-shot. A body of work approaches this problem by offline metric learning @cite_32 @cite_39 @cite_18 , where a generic similarity metric is learned on the training data and directly transferred to the new categories using simple nearest neighbor classifiers without further adaptation. Recent works on meta-learning @cite_11 @cite_12 @cite_40 take a learning-to-learn approach using online algorithms. In order not to overfit to the few examples, they develop meta-learners to find a common embedding space, which can be further finetuned with fast convergence to the target problem. Recent works @cite_1 @cite_37 using meta-learning consider the combined problem of semi-supervised learning and few-shot recognition, by allowing access to unlabeled data in few-shot recognition. This drives few-shot recognition into more realistic scenarios. We follow this setting as we study few-shot recognition.
{ "cite_N": [ "@cite_18", "@cite_37", "@cite_28", "@cite_1", "@cite_32", "@cite_39", "@cite_40", "@cite_12", "@cite_11" ], "mid": [ "2949442616", "2770468159", "2144209400", "2787035179", "2963341924", "2601450892", "2951881474", "2742093937", "2951775809" ], "abstract": [ "Current major approaches to visual recognition follow an end-to-end formulation that classifies an input image into one of the pre-determined set of semantic categories. Parametric softmax classifiers are a common choice for such a closed world with fixed categories, especially when big labeled data is available during training. However, this becomes problematic for open-set scenarios where new categories are encountered with very few examples for learning a generalizable parametric classifier. We adopt a non-parametric approach for visual recognition by optimizing feature embeddings instead of parametric classifiers. We use a deep neural network to learn the visual feature that preserves the neighborhood structure in the semantic space, based on the Neighborhood Component Analysis (NCA) criterion. Limited by its computational bottlenecks, we devise a mechanism to use augmented memory to scale NCA for large datasets and very deep networks. Our experiments deliver not only remarkable performance on ImageNet classification for such a simple non-parametric method, but most importantly a more generalizable feature representation for sub-category discovery and few-shot recognition.", "We propose to study the problem of few-shot learning with the prism of inference on a partially observed graphical model, constructed from a collection of input images whose label can be either observed or not. By assimilating generic message-passing inference algorithms with their neural-network counterparts, we define a graph neural network architecture that generalizes several of the recently proposed few-shot learning models. Besides providing improved numerical performance, our framework is easily extended to variants of few-shot learning, such as semi-supervised or active learning, demonstrating the ability of graph-based models to operate well on 'relational' tasks.", "", "In few-shot classification, we are interested in learning algorithms that train a classifier from only a handful of labeled examples. Recent progress in few-shot classification has featured meta-learning, in which a parameterized model for a learning algorithm is defined and trained on episodes representing different classification problems, each with a small labeled training set and its corresponding test set. In this work, we advance this few-shot classification paradigm towards a scenario where unlabeled examples are also available within each episode. We consider two situations: one where all unlabeled examples are assumed to belong to the same set of classes as the labeled examples of the episode, as well as the more challenging situation where examples from other distractor classes are also provided. To address this paradigm, we propose novel extensions of Prototypical Networks (, 2017) that are augmented with the ability to use unlabeled examples when producing prototypes. These models are trained in an end-to-end way on episodes, to learn to leverage the unlabeled examples successfully. We evaluate these methods on versions of the Omniglot and miniImageNet benchmarks, adapted to this new framework augmented with unlabeled examples. We also propose a new split of ImageNet, consisting of a large set of classes, with a hierarchical structure. 
Our experiments confirm that our Prototypical Networks can learn to improve their predictions due to unlabeled examples, much like a semi-supervised algorithm would.", "Learning from a few examples remains a key challenge in machine learning. Despite recent advances in important domains such as vision and language, the standard supervised deep learning paradigm does not offer a satisfactory solution for learning new concepts rapidly from little data. In this work, we employ ideas from metric learning based on deep neural features and from recent advances that augment neural networks with external memories. Our framework learns a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types. We then define one-shot learning problems on vision (using Omniglot, ImageNet) and language tasks. Our algorithm improves one-shot accuracy on ImageNet from 87.6 to 93.2 and from 88.0 to 93.8 on Omniglot compared to competing approaches. We also demonstrate the usefulness of the same model on language modeling by introducing a one-shot task on the Penn Treebank.", "A recent approach to few-shot classification called matching networks has demonstrated the benefits of coupling metric learning with a training procedure that mimics test. This approach relies on a complicated fine-tuning procedure and an attention scheme that forms a distribution over all points in the support set, scaling poorly with its size. We propose a more streamlined approach, prototypical networks, that learns a metric space in which few-shot classification can be performed by computing Euclidean distances to prototype representations of each class, rather than individual points. Our method is competitive with state-of-the-art one-shot classification approaches while being much simpler and more scalable with the size of the support set. We empirically demonstrate the performance of our approach on the Omniglot and mini-ImageNet datasets. We further demonstrate that a similar idea can be used for zero-shot learning, where each class is described by a set of attributes, and achieve state-of-the-art results on the Caltech UCSD bird dataset.", "Deep neural networks excel in regimes with large amounts of data, but tend to struggle when data is scarce or when they need to adapt quickly to changes in the task. In response, recent work in meta-learning proposes training a meta-learner on a distribution of similar tasks, in the hopes of generalization to novel but related tasks by learning a high-level strategy that captures the essence of the problem it is asked to solve. However, many recent meta-learning approaches are extensively hand-designed, either using architectures specialized to a particular application, or hard-coding algorithmic components that constrain how the meta-learner solves the task. We propose a class of simple and generic meta-learner architectures that use a novel combination of temporal convolutions and soft attention; the former to aggregate information from past experience and the latter to pinpoint specific pieces of information. In the most extensive set of meta-learning experiments to date, we evaluate the resulting Simple Neural AttentIve Learner (or SNAIL) on several heavily-benchmarked tasks. 
On all tasks, in both supervised and reinforcement learning, SNAIL attains state-of-the-art performance by significant margins.", "Few-shot learning is challenging for learning algorithms that learn each task in isolation and from scratch. In contrast, meta-learning learns from many related tasks a meta-learner that can learn a new task more accurately and faster with fewer examples, where the choice of meta-learners is crucial. In this paper, we develop Meta-SGD, an SGD-like, easily trainable meta-learner that can initialize and adapt any differentiable learner in just one step, on both supervised learning and reinforcement learning. Compared to the popular meta-learner LSTM, Meta-SGD is conceptually simpler, easier to implement, and can be learned more efficiently. Compared to the latest meta-learner MAML, Meta-SGD has a much higher capacity by learning to learn not just the learner initialization, but also the learner update direction and learning rate, all in a single meta-learning process. Meta-SGD shows highly competitive performance for few-shot learning on regression, classification, and reinforcement learning.", "We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning. The goal of meta-learning is to train a model on a variety of learning tasks, such that it can solve new learning tasks using only a small number of training samples. In our approach, the parameters of the model are explicitly trained such that a small number of gradient steps with a small amount of training data from a new task will produce good generalization performance on that task. In effect, our method trains the model to be easy to fine-tune. We demonstrate that this approach leads to state-of-the-art performance on two few-shot image classification benchmarks, produces good results on few-shot regression, and accelerates fine-tuning for policy gradient reinforcement learning with neural network policies." ] }
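To illustrate metric-based few-shot classification, a sketch of nearest-prototype prediction in the style of prototypical networks (the 64-d embeddings are random stand-ins for the output of a learned metric):

```python
import torch

def proto_classify(support, support_y, query, n_way):
    # Class prototypes are the mean embeddings of the support examples;
    # queries are assigned to the nearest prototype in Euclidean distance.
    protos = torch.stack([support[support_y == c].mean(0) for c in range(n_way)])
    return torch.cdist(query, protos).argmin(dim=1)

# Toy 5-way 1-shot episode in an (assumed pre-learned) embedding space.
centers = torch.randn(5, 64)
support = centers + 0.1 * torch.randn(5, 64)           # one shot per class
query = centers.repeat_interleave(3, 0) + 0.1 * torch.randn(15, 64)
preds = proto_classify(support, torch.arange(5), query, n_way=5)
```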
1812.08781
2903787679
We study object recognition under the constraint that each object class is represented by only very few observations. Semi-supervised learning, transfer learning, and few-shot recognition are all concerned with achieving fast generalization from few labeled examples. In this paper, we propose a generic framework that utilizes unlabeled data to aid generalization for all three tasks. Our approach is to create much more training data through label propagation from the few labeled examples to a vast collection of unannotated images. The main contribution of the paper is to show that such a label propagation scheme can be highly effective when the similarity metric used for propagation is transferred from other related domains. We test various combinations of supervised and unsupervised metric learning methods with various label propagation algorithms. We find that our framework is very generic, without being sensitive to any specific techniques. By taking advantage of unlabeled data in this way, we achieve significant improvements on all three tasks.
Since the inception of the ImageNet challenge @cite_26 , transfer learning has become ubiquitous in visual recognition, such as in object detection @cite_21 and semantic segmentation @cite_29 , by simply transferring the network weights learned on ImageNet classification and finetuning on the target task. When the pretraining task and the target task are closely related, this tends to generalize much better than training from scratch on the target task alone. Domain adaptation seeks to address a much more difficult scenario where there is a large gap between the inputs of the source and target domains @cite_41 , for example, between real images and synthetic images. What we study in this paper is metric transfer. Different from prior work @cite_10 , which employs metric transfer to reduce the distribution divergence between domains, we use metric transfer to propagate labels. Through this, we show that metric propagation is an effective method for learning with small data.
{ "cite_N": [ "@cite_26", "@cite_41", "@cite_29", "@cite_21", "@cite_10" ], "mid": [ "2117539524", "2767657961", "1903029394", "2102605133", "2588646734" ], "abstract": [ "The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements.", "Domain adaptation is critical for success in new, unseen environments. Adversarial adaptation models applied in feature spaces discover domain invariant representations, but are difficult to visualize and sometimes fail to capture pixel-level and low-level domain shifts. Recent work has shown that generative adversarial networks combined with cycle-consistency constraints are surprisingly effective at mapping images between domains, even without the use of aligned image pairs. We propose a novel discriminatively-trained Cycle-Consistent Adversarial Domain Adaptation model. CyCADA adapts representations at both the pixel-level and feature-level, enforces cycle-consistency while leveraging a task loss, and does not require aligned pairs. Our model can be applied in a variety of visual recognition and prediction settings. We show new state-of-the-art results across multiple adaptation tasks, including digit classification and semantic segmentation of road scenes demonstrating transfer from synthetic to real world domains.", "Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image.", "Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. 
The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn.", "Transfer learning has been proven to be effective for the problems where training data from a source domain and test data from a target domain are drawn from different distributions. To reduce the distribution divergence between the source domain and the target domain, many previous studies have been focused on designing and optimizing objective functions with the Euclidean distance to measure dissimilarity between instances. However, in some real-world applications, the Euclidean distance may be inappropriate to capture the intrinsic similarity or dissimilarity between instances. To deal with this issue, in this paper, we propose a metric transfer learning framework (MTLF) to encode metric learning in transfer learning. In MTLF, instance weights are learned and exploited to bridge the distributions of different domains, while Mahalanobis distance is learned simultaneously to maximize the intra-class distances and minimize the inter-class distances for the target domain. Unlike previous work where instance weights and Mahalanobis distance are trained in a pipelined framework that potentially leads to error propagation across different components, MTLF attempts to learn instance weights and a Mahalanobis distance in a parallel framework to make knowledge transfer across domains more effective. Furthermore, we develop general solutions to both classification and regression problems on top of MTLF, respectively. We conduct extensive experiments on several real-world datasets on object recognition, handwriting recognition, and WiFi location to verify the effectiveness of MTLF compared with a number of state-of-the-art methods." ] }
1812.08848
2906075167
The Saliency Model Implementation Library for Experimental Research (SMILER) is a new software package which provides an open, standardized, and extensible framework for maintaining and executing computational saliency models. This work drastically reduces the human effort required to apply saliency algorithms to new tasks and datasets, while also ensuring consistency and procedural correctness for results and conclusions produced by different parties. At its launch, SMILER already includes twenty-three saliency models (fourteen models based in MATLAB and nine supported through containerization), and the open design of SMILER encourages this number to grow with future contributions from the community. The project may be downloaded and contributed to through its GitHub page: this https URL
It should be noted that the current collection of models supported by SMILER consists of models which focus on pixel-wise assignment of conspicuity values and which have been predominantly applied to the domain of human fixation prediction. There are, however, other branches of saliency model research, such as salient object detection (see @cite_27 for an early example and @cite_46 for an overview and recent survey). Likewise, the models included are predominantly focused on saliency prediction over static scenes, but there is nevertheless significant interest in saliency over dynamic stimuli (see @cite_19 @cite_12 @cite_59 ). This focus on models which are more representative of fixation prediction over static images is not intended to dismiss or ignore these other research avenues, but rather is meant to form a solid base for the SMILER platform.
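For readers unfamiliar with pixel-wise conspicuity computation, the sketch below implements one classical algorithm of this kind, the spectral-residual method. It is a generic illustration rather than any of SMILER's bundled models, and the Gaussian smoothing stands in for the averaging filter of the original formulation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def spectral_residual_saliency(gray):
    """Pixel-wise conspicuity map via the spectral-residual idea:
    suppress the smooth part of the log-amplitude spectrum, keep the
    residual, and invert back to image space."""
    f = np.fft.fft2(gray)
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    residual = log_amp - gaussian_filter(log_amp, sigma=3)  # spectral residual
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    sal = gaussian_filter(sal, sigma=3)                     # post-smoothing
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)

# Usage with a random "image", just to show the call signature.
saliency_map = spectral_residual_saliency(np.random.rand(64, 64))
```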
{ "cite_N": [ "@cite_19", "@cite_27", "@cite_59", "@cite_46", "@cite_12" ], "mid": [ "1970342316", "2130502991", "1782414784", "", "2164720308" ], "abstract": [ "This paper presents a spatio-temporal saliency model that predicts eye movement during video free viewing. This model is inspired by the biology of the first steps of the human visual system. The model extracts two signals from video stream corresponding to the two main outputs of the retina: parvocellular and magnocellular. Then, both signals are split into elementary feature maps by cortical-like filters. These feature maps are used to form two saliency maps: a static and a dynamic one. These maps are then fused into a spatio-temporal saliency map. The model is evaluated by comparing the salient areas of each frame predicted by the spatio-temporal saliency map to the eye positions of different subjects during a free video viewing experiment with a large database (17000 frames). In parallel, the static and the dynamic pathways are analyzed to understand what is more or less salient and for what type of videos our model is a good or a poor predictor of eye movement.", "We present a novel computational model to explore the relatedness of objectness and saliency, each of which plays an important role in the study of visual attention. The proposed framework conceptually integrates these two concepts via constructing a graphical model to account for their relationships, and concurrently improves their estimation by iteratively optimizing a novel energy function realizing the model. Specifically, the energy function comprises the objectness, the saliency, and the interaction energy, respectively corresponding to explain their individual regularities and the mutual effects. Minimizing the energy by fixing one or the other would elegantly transform the model into solving the problem of objectness or saliency estimation, while the useful information from the other concept can be utilized through the interaction term. Experimental results on two benchmark datasets demonstrate that the proposed model can simultaneously yield a saliency map of better quality and a more meaningful objectness output for salient object detection.", "Early delineation of the most salient portions of a temporal image stream (e.g., a video) could serve to guide subsequent processing to the most important portions of the data at hand. Toward such ends, the present paper documents an algorithm for spatiotemporal salience detection. The algorithm is based on a definition of salient regions as those that differ from their surrounding regions, with the individual regions characterized in terms of 3D, (x,y,t), measurements of visual spacetime orientation. The algorithm has been implemented in software and evaluated empirically on a publically available database for visual salience detection. The results show that the algorithm outperforms a variety of alternative algorithms and even approaches human performance.", "", "A spatiotemporal saliency algorithm based on a center-surround framework is proposed. The algorithm is inspired by biological mechanisms of motion-based perceptual grouping and extends a discriminant formulation of center-surround saliency previously proposed for static imagery. Under this formulation, the saliency of a location is equated to the power of a predefined set of features to discriminate between the visual stimuli in a center and a surround window, centered at that location. 
The features are spatiotemporal video patches and are modeled as dynamic textures, to achieve a principled joint characterization of the spatial and temporal components of saliency. The combination of discriminant center-surround saliency with the modeling power of dynamic textures yields a robust, versatile, and fully unsupervised spatiotemporal saliency algorithm, applicable to scenes with highly dynamic backgrounds and moving cameras. The related problem of background subtraction is treated as the complement of saliency detection, by classifying nonsalient (with respect to appearance and motion dynamics) points in the visual field as background. The algorithm is tested for background subtraction on challenging sequences, and shown to substantially outperform various state-of-the-art techniques. Quantitatively, its average error rate is almost half that of the closest competitor." ] }
1812.09041
2906177770
Emotional content is a crucial ingredient in user-generated videos. However, the sparsely expressed emotions in user-generated videos make emotion analysis difficult. In this paper, we propose a new neural approach, the Bi-stream Emotion Attribution-Classification Network (BEAC-Net), to solve three related emotion analysis tasks: emotion recognition, emotion attribution, and emotion-oriented summarization, in an integrated framework. BEAC-Net has two major constituents, an attribution network and a classification network. The attribution network extracts the main emotional segment that classification should focus on in order to mitigate the sparsity problem. The classification network utilizes both the extracted segment and the original video in a bi-stream architecture. We contribute a new dataset for the emotion attribution task with human-annotated ground-truth labels for emotion segments. Experiments on two video datasets demonstrate the superior performance of the proposed framework and the complementary nature of the dual classification streams.
Extensive research has been performed on the problem of recognizing emotions from visual information. Most work follows psychological theories that lay out a fixed number of emotion categories, such as Ekman's six pan-cultural basic emotions @cite_60 @cite_2 and Plutchik's wheel of emotion @cite_62 . These emotions are considered ``basic'' because they are associated with prototypical and widely recognized facial expressions and verbal and non-verbal language, and have distinct appraisals, antecedent events, and physiological responses. These emotions constantly shape our expression and perception via appraisal-behavior cycles throughout our daily activities, including video production and consumption.
{ "cite_N": [ "@cite_62", "@cite_2", "@cite_60" ], "mid": [ "2321825897", "2126181565", "2046677541" ], "abstract": [ "", "How do emotions and moods color cognition? In this article, we examine how such reactions influence both judgments and cognitive performance. We argue that many affective influences are due, not to affective reactions themselves, but to the information they carry about value. The specific kind of influence that occurs depends on the focus of the agent at the time. When making evaluative judgments, for example, an agent's positive affect may emerge as a positive attitude toward a person or object. But when an agent focuses on a cognitive task, positive affect may act like feedback about the value of one's approach. As a result, positive affect tends to promote cognitive, relational processes, whereas negative affect tends to inhibit relational processing, resulting in more perceptual, stimulus-specific processing. As a consequence, many textbook phenomena from cognitive psychology occur readily in happy moods, but are inhibited or even absent in sad moods (149).", "Wrist joint prosthesis. As a replacement for the joint in a human wrist, there is provided a prosthesis permitting both vertical motion, sidewise motion and rotary motion but preventing twisting motion around an axis projecting from and parallel with the lower forearm. For this purpose, there is provided a metal socket fitted with a prong receivable into a bone of the forearm. A plastic cup made of material self-lubricating with respect to such metal socket is fitted within said socket, snappable thereinto to resist but not prevent withdrawal therefrom and having a rectangular projection-and-slot relationship with said metal socket to permit relative motion with respect thereto in only a single plane. A metal ball is receivable into the recess of said plastic cup, snappable therewith to resist but not prevent withdrawal therefrom and has a rectangular projection-and-slot relationship with the inside of said plastic cup to permit relative movement with respect thereto in only a single plane, said plane being substantially perpendicular to said first-named plane. Suitable projections are provided on said ball for reception into the bones of selected fingers, normally the index and middle finger. Placement of said prosthesis in one position or a mirror image thereof will render said prosthesis without other change adaptable for use with one hand or the other hand as desired." ] }
1812.09041
2906177770
Emotional content is a crucial ingredient in user-generated videos. However, the sparsely expressed emotions in user-generated videos make emotion analysis difficult. In this paper, we propose a new neural approach, the Bi-stream Emotion Attribution-Classification Network (BEAC-Net), to solve three related emotion analysis tasks: emotion recognition, emotion attribution, and emotion-oriented summarization, in an integrated framework. BEAC-Net has two major constituents, an attribution network and a classification network. The attribution network extracts the main emotional segment that classification should focus on in order to mitigate the sparsity problem. The classification network utilizes both the extracted segment and the original video in a bi-stream architecture. We contribute a new dataset for the emotion attribution task with human-annotated ground-truth labels for emotion segments. Experiments on two video datasets demonstrate the superior performance of the proposed framework and the complementary nature of the dual classification streams.
Dimensional theories of emotion @cite_11 @cite_31 @cite_65 characterize emotions as points in a multi-dimensional space. This direction is theoretically appealing, as it allows richer emotion descriptions than the basic categories. Early work almost exclusively uses the two dimensions of valence and arousal @cite_11 , whereas more recent theories have proposed three @cite_65 or four dimensions @cite_31 . To date, most computational approaches that adopt the dimensional view @cite_23 @cite_22 @cite_38 employ valence and arousal. Notably, @cite_7 proposes a three-dimensional model for movie recommendation, with dimensions for passionate vs. reflective, fast-paced vs. slow-paced, and high vs. low energy.
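A minimal sketch of what the dimensional view buys computationally: emotions become points in a continuous space, and a categorical readout reduces to a nearest-point lookup. The coordinates below are rough illustrative placements, not values taken from any cited theory.

```python
import numpy as np

# Illustrative valence-arousal coordinates for a few categories
# (the placements are assumptions for demonstration only).
VA_POINTS = {
    "joy":     ( 0.8,  0.5),
    "anger":   (-0.6,  0.7),
    "sadness": (-0.7, -0.4),
    "calm":    ( 0.4, -0.6),
}

def nearest_category(valence, arousal):
    """Map a point in the 2-D valence-arousal plane to its closest category."""
    query = np.array([valence, arousal])
    return min(VA_POINTS, key=lambda k: np.linalg.norm(query - np.array(VA_POINTS[k])))

print(nearest_category(0.7, 0.4))  # -> "joy"
```

The same lookup generalizes directly to the three- and four-dimensional spaces mentioned above by adding coordinates.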
{ "cite_N": [ "@cite_38", "@cite_22", "@cite_7", "@cite_65", "@cite_23", "@cite_31", "@cite_11" ], "mid": [ "2044807399", "2766925079", "2124801089", "1999937463", "2765291577", "2156848952", "2149628368" ], "abstract": [ "Research in affective computing requires ground truth data for training and benchmarking computational models for machine-based emotion understanding. In this paper, we propose a large video database, namely LIRIS-ACCEDE, for affective content analysis and related applications, including video indexing, summarization or browsing. In contrast to existing datasets with very few video resources and limited accessibility due to copyright constraints, LIRIS-ACCEDE consists of 9,800 good quality video excerpts with a large content diversity. All excerpts are shared under creative commons licenses and can thus be freely distributed without copyright issues. Affective annotations were achieved using crowdsourcing through a pair-wise video comparison protocol, thereby ensuring that annotations are fully consistent, as testified by a high inter-annotator agreement, despite the large diversity of raters’ cultural backgrounds. In addition, to enable fair comparison and landmark progresses of future affective computational models, we further provide four experimental protocols and a baseline for prediction of emotions using a large set of both visual and audio features. The dataset (the video clips, annotations, features and protocols) is publicly available at: http: liris-accede.ec-lyon.fr .", "The continuous dimensional emotion can depict subtlety and complexity of emotional change, which is an inherently challenging problem with growing attention. This paper presents our automatic prediction of dimensional emotional state for Audio-Visual Emotion Challenge (AVEC 2017), which uses multi-features and fusion across all available modalities. Besides the baseline features provided by the organizers, we also extract other acoustic audio feature sets, appearance features and deep visual features as complementary features. Each type of feature is trained using Long Short-Term Memory Recurrent Neutral Network (LSTM-RNN) for every dimensional emotion prediction separately considering annotation delay and temporal pooling. To overcome overfitting problem, robust models are chosen carefully for individual model. Finally, multimodal emotion fusion is achieved by utilizing Support Vector Regression (SVR) with the estimates from different feature sets in decision level fusion. The experimental results indicate that our extracted features are beneficial to performance improvement and our system design achieves very promising results with Concordant Correlation Coefficient (CCC), which outperform the baseline system on the testing set for arousal of 0.599 vs 0.375 (baseline) and for valence of 0.721 vs 0.466 and for liking 0.295 vs 0.246.", "The problem of relating media content to users' affective responses is here addressed. Previous work suggests that a direct mapping of audio-visual properties into emotion categories elicited by films is rather difficult, due to the high variability of individual reactions. To reduce the gap between the objective level of video features and the subjective sphere of emotions, we propose to shift the representation towards the connotative properties of movies, in a space inter-subjectively shared among users. Consequently, the connotative space allows to define, relate, and compare affective descriptions of film videos on equal footing. 
An extensive test involving a significant number of users watching famous movie scenes suggests that the connotative space can be related to affective categories of a single user. We apply this finding to reach high performance in meeting user's emotional preferences.", "The monoamines serotonin, dopamine and noradrenaline have a great impact on mood, emotion and behavior. This article presents a new three-dimensional model for monoamine neurotransmitters and emoti ...", "Automatic emotion recognition is a challenging task which can make great impact on improving natural human computer interactions. In this paper, we present our effort for the Affect Subtask in the Audio Visual Emotion Challenge (AVEC) 2017, which requires participants to perform continuous emotion prediction on three affective dimensions: Arousal, Valence and Likability based on the audiovisual signals. We highlight three aspects of our solutions: 1) we explore and fuse different hand-crafted and deep learned features from all available modalities including acoustic, visual, and textual modalities, and we further consider the interlocutor influence for the acoustic features; 2) we compare the effectiveness of non-temporal model SVR and temporal model LSTM-RNN and show that the LSTM-RNN can not only alleviate the feature engineering efforts such as construction of contextual features and feature delay, but also improve the recognition performance significantly; 3) we apply multi-task learning strategy for collaborative prediction of multiple emotion dimensions with shared representations according to the fact that different emotion dimensions are correlated with each other. Our solutions achieve the CCC of 0.675, 0.756 and 0.509 on arousal, valence, and likability respectively on the challenge testing set, which outperforms the baseline system with corresponding CCC of 0.375, 0.466, and 0.246 on arousal, valence, and likability.", "For more than half a century, emotion researchers have attempted to establish the dimensional space that most economically accounts for similarities and differences in emotional experience. Today, many researchers focus exclusively on two-dimensional models involving valence and arousal. Adopting a theoretically based approach, we show for three languages that four dimensions are needed to satisfactorily represent similarities and differences in the meaning of emotion words. In order of importance, these dimensions are evaluationpleasantness, potency-control, activation-arousal, and unpredictability. They were identified on the basis of the applicability of 144 features representing the six components of emotions: (a) appraisals of events, (b) psychophysiological changes, (c) motor expressions, (d) action tendencies, (e) subjective experiences, and (f) emotion regulation.", "" ] }
1812.09041
2906177770
Emotional content is a crucial ingredient in user-generated videos. However, the sparsely expressed emotions in user-generated videos make emotion analysis difficult. In this paper, we propose a new neural approach, the Bi-stream Emotion Attribution-Classification Network (BEAC-Net), to solve three related emotion analysis tasks: emotion recognition, emotion attribution, and emotion-oriented summarization, in an integrated framework. BEAC-Net has two major constituents, an attribution network and a classification network. The attribution network extracts the main emotional segment that classification should focus on in order to mitigate the sparsity problem. The classification network utilizes both the extracted segment and the original video in a bi-stream architecture. We contribute a new dataset for the emotion attribution task with human-annotated ground-truth labels for emotion segments. Experiments on two video datasets demonstrate the superior performance of the proposed framework and the complementary nature of the dual classification streams.
Various researchers have explored features for visual emotion recognition, such as features inspired by psychology and art theory @cite_46 and shape features @cite_24 . A classifier such as a support vector machine (SVM) or K-nearest neighbors (KNN) is then trained to distinguish the emotions of videos. @cite_14 adapted a variant of the SVM with various audio-visual features to divide 2040 frames of 36 Hollywood movies into 7 emotion categories. @cite_39 focused on animated GIF files, which are similar to short video clips. @cite_25 used KNN to classify music video clips.
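This feature-plus-classifier recipe is straightforward to sketch. The pipeline below uses placeholder random features standing in for the handcrafted psychology- and art-inspired descriptors of the cited work; the dimensions and class count are assumptions for illustration.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder descriptors (e.g., color/shape statistics per clip) and labels;
# in the cited work these come from real video features.
X = np.random.rand(200, 64)            # 200 clips, 64-D feature vectors
y = np.random.randint(0, 7, size=200)  # 7 emotion classes, as in @cite_14

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, y)
pred = clf.predict(X[:5])  # predicted emotion classes for the first 5 clips
```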
{ "cite_N": [ "@cite_14", "@cite_39", "@cite_24", "@cite_46", "@cite_25" ], "mid": [ "2156709807", "2017411072", "2085940040", "2003856922", "2146104196" ], "abstract": [ "Affective understanding of film plays an important role in sophisticated movie analysis, ranking and indexing. However, due to the seemingly inscrutable nature of emotions and the broad affective gap from low-level features, this problem is seldom addressed. In this paper, we develop a systematic approach grounded upon psychology and cinematography to address several important issues in affective understanding. An appropriate set of affective categories are identified and steps for their classification developed. A number of effective audiovisual cues are formulated to help bridge the affective gap. In particular, a holistic method of extracting affective information from the multifaceted audio stream has been introduced. Besides classifying every scene in Hollywood domain movies probabilistically into the affective categories, some exciting applications are demonstrated. The experimental results validate the proposed approach and the efficacy of the audiovisual cues.", "Animated GIFs are everywhere on the Web. Our work focuses on the computational prediction of emotions perceived by viewers after they are shown animated GIF images. We evaluate our results on a dataset of over 3,800 animated GIFs gathered from MIT's GIFGIF platform, each with scores for 17 discrete emotions aggregated from over 2.5M user annotations - the first computational evaluation of its kind for content-based prediction on animated GIFs to our knowledge. In addition, we advocate a conceptual paradigm in emotion prediction that shows delineating distinct types of emotion is important and is useful to be concrete about the emotion target. One of our objectives is to systematically compare different types of content features for emotion prediction, including low-level, aesthetics, semantic and face features. We also formulate a multi-task regression problem to evaluate whether viewer perceived emotion prediction can benefit from jointly learning across emotion classes compared to disjoint, independent learning.", "We investigated how shape features in natural images influence emotions aroused in human beings. Shapes and their characteristics such as roundness, angularity, simplicity, and complexity have been postulated to affect the emotional responses of human beings in the field of visual arts and psychology. However, no prior research has modeled the dimensionality of emotions aroused by roundness and angularity. Our contributions include an in depth statistical analysis to understand the relationship between shapes and emotions. Through experimental results on the International Affective Picture System (IAPS) dataset we provide evidence for the significance of roundness-angularity and simplicity-complexity on predicting emotional content in images. We combine our shape features with other state-of-the-art features to show a gain in prediction and classification accuracy. We model emotions from a dimensional perspective in order to predict valence and arousal ratings which have advantages over modeling the traditional discrete emotional categories. Finally, we distinguish images with strong emotional content from emotionally neutral images with high accuracy.", "Images can affect people on an emotional level. Since the emotions that arise in the viewer of an image are highly subjective, they are rarely indexed. 
However there are situations when it would be helpful if images could be retrieved based on their emotional content. We investigate and develop methods to extract and combine low-level features that represent the emotional content of an image, and use these for image emotion classification. Specifically, we exploit theoretical and empirical concepts from psychology and art theory to extract image features that are specific to the domain of artworks with emotional expression. For testing and training, we use three data sets: the International Affective Picture System (IAPS); a set of artistic photography from a photo sharing site (to investigate whether the conscious use of colors and textures displayed by the artists improves the classification); and a set of peer rated abstract paintings to investigate the influence of the features and ratings on pictures without contextual content. Improved classification results are obtained on the International Affective Picture System (IAPS), compared to state of the art work.", "Nowadays, the amount of multimedia contents is explosively increasing and it is often a challenging problem to find a content that will be appealing or matches users' current mood or affective state. In order to achieve this goal, an effcient indexing technique should be developed to annotate multi-media contents such that these annotations can be used in a retrieval process using an appropriate query. One approach to such indexing techniques is to determine the affect(type and intensity), which can be induced in a user while consuming multimedia. In this paper, affective content analysis of music video clips is performed to determine the emotion they can induce in people. To this end, a subjective test was developed, where 32 participants watched different music video clips and assessed their induced emotions. These self assessments were used as ground-truth and the results of classification using audio, visual and audiovisual features extracted from music video clips are presented and compared." ] }
1812.09041
2906177770
Emotional content is a crucial ingredient in user-generated videos. However, the sparsely expressed emotions in user-generated videos make emotion analysis difficult. In this paper, we propose a new neural approach, the Bi-stream Emotion Attribution-Classification Network (BEAC-Net), to solve three related emotion analysis tasks: emotion recognition, emotion attribution, and emotion-oriented summarization, in an integrated framework. BEAC-Net has two major constituents, an attribution network and a classification network. The attribution network extracts the main emotional segment that classification should focus on in order to mitigate the sparsity problem. The classification network utilizes both the extracted segment and the original video in a bi-stream architecture. We contribute a new dataset for the emotion attribution task with human-annotated ground-truth labels for emotion segments. Experiments on two video datasets demonstrate the superior performance of the proposed framework and the complementary nature of the dual classification streams.
Since facial expressions are an important channel for conveying emotion, many researchers have focused on recognizing emotions from facial expressions. @cite_20 paid close attention to viewers' facial signals for affect detection. @cite_45 extracted viewers' facial activities frame by frame and drew an emotional curve to segment each video into affective sections. @cite_37 created features by localizing facial muscular regions. @cite_9 constructed expressionlets, a mid-level representation for dynamic facial expression recognition.
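The emotional-curve idea can be sketched generically: score each frame with an expression classifier, then cut the resulting intensity curve into affective sections. The simple thresholding rule below is a stand-in, not the segmentation procedure of @cite_45.

```python
import numpy as np

def emotional_sections(frame_scores, threshold=0.5, min_len=5):
    """Given a per-frame emotion-intensity curve (e.g., the max facial-expression
    classifier probability per frame), return (start, end) index pairs of
    contiguous runs above the threshold that last at least min_len frames."""
    above = frame_scores >= threshold
    sections, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i                       # a run of emotional frames begins
        elif not flag and start is not None:
            if i - start >= min_len:
                sections.append((start, i)) # close a sufficiently long run
            start = None
    if start is not None and len(above) - start >= min_len:
        sections.append((start, len(above)))
    return sections

# Toy curve: 20 neutral frames, 30 emotional frames, 10 neutral frames.
curve = np.concatenate([np.zeros(20), np.ones(30) * 0.9, np.zeros(10)])
print(emotional_sections(curve))  # [(20, 50)]
```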
{ "cite_N": [ "@cite_9", "@cite_37", "@cite_45", "@cite_20" ], "mid": [ "2134860945", "2339620988", "2120856140", "2161809425" ], "abstract": [ "Facial expression is temporally dynamic event which can be decomposed into a set of muscle motions occurring in different facial regions over various time intervals. For dynamic expression recognition, two key issues, temporal alignment and semantics-aware dynamic representation, must be taken into account. In this paper, we attempt to solve both problems via manifold modeling of videos based on a novel mid-level representation, i.e. expressionlet. Specifically, our method contains three key components: 1) each expression video clip is modeled as a spatio-temporal manifold (STM) formed by dense low-level features, 2) a Universal Manifold Model (UMM) is learned over all low-level features and represented as a set of local ST modes to statistically unify all the STMs. 3) the local modes on each STM can be instantiated by fitting to UMM, and the corresponding expressionlet is constructed by modeling the variations in each local ST mode. With above strategy, expression videos are naturally aligned both spatially and temporally. To enhance the discriminative power, the expressionlet-based STM representation is further processed with discriminant embedding. Our method is evaluated on four public expression databases, CK+, MMI, Oulu-CASIA, and AFEW. In all cases, our method reports results better than the known state-of-the-art.", "Facial expression is an important channel for human nonverbal communication. This paper presents a novel and effective approach to automatic 3D 4D facial expression recognition based on the muscular movement model (MMM). In contrast to most of existing methods, the MMM deals with such an issue in the viewpoint of anatomy. It first automatically segments the input 3D face (frame) by localizing the corresponding points within each muscular region of the reference using iterative closest normal point. A set of features with multiple differential quantities, including @math , @math and @math values, are then extracted to describe the geometry deformation of each segmented region. Meanwhile, we analyze the importance of these muscular areas, and a score level fusion strategy is exploited to optimize their weights by the genetic algorithm in the learning step. The support vector machine and the hidden Markov model are finally used to predict the expression label in 3D and 4D, respectively. The experiments are conducted on the BU-3DFE and BU-4DFE databases, and the results achieved clearly demonstrate the effectiveness of the proposed method.", "Most previous works on video indexing and recommendation were only based on the content of video itself, without considering the affective analysis of viewers, which is an efficient and important way to reflect viewers' attitudes, feelings and evaluations of videos. In this paper, we propose a novel method to index and recommend videos based on affective analysis, mainly on facial expression recognition of viewers. We first build a facial expression recognition classifier by embedding the process of building compositional Haar-like features into hidden conditional random fields (HCRFs). Then we extract viewers' facial expressions frame by frame through the videos, collected from the camera when viewers are watching videos, to obtain the affections of viewers. Finally, we draw the affective curve which tells the process of affection changes. 
Through the curve, we segment each video into affective sections, give the indexing result of the videos, and list recommendation points from views' aspect. Experiments on our collected database from the web show that the proposed method has a promising performance.", "This paper presents an approach to affective video summarisation based on the facial expressions (FX) of viewers. A facial expression recognition system was deployed to capture a viewer's face and his her expressions. The user's facial expressions were analysed to infer personalised affective scenes from videos. We proposed two models, pronounced level and expression's change rate, to generate affective summaries using the FX data. Our result suggested that FX can be a promising source to exploit for affective video summaries that can be tailored to individual preferences." ] }
1812.09041
2906177770
Emotional content is a crucial ingredient in user-generated videos. However, the sparsely expressed emotions in user-generated videos make emotion analysis difficult. In this paper, we propose a new neural approach, the Bi-stream Emotion Attribution-Classification Network (BEAC-Net), to solve three related emotion analysis tasks: emotion recognition, emotion attribution, and emotion-oriented summarization, in an integrated framework. BEAC-Net has two major constituents, an attribution network and a classification network. The attribution network extracts the main emotional segment that classification should focus on in order to mitigate the sparsity problem. The classification network utilizes both the extracted segment and the original video in a bi-stream architecture. We contribute a new dataset for the emotion attribution task with human-annotated ground-truth labels for emotion segments. Experiments on two video datasets demonstrate the superior performance of the proposed framework and the complementary nature of the dual classification streams.
Deep neural networks have also been used for visual sentiment analysis @cite_56 @cite_58 . A massive-scale visual sentiment dataset was proposed in SentiBank @cite_58 and DeepSentiBank @cite_55 . SentiBank is composed of 1,533 adjective-noun pairs, such as ``happy dog'' and ``beautiful sky''. Subsequently, the authors used deep convolutional neural networks (CNNs) on images with strong sentiment and achieved better performance than the earlier models.
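The adjective-noun pair (ANP) construction is easy to illustrate; the toy vocabularies below are assumptions standing in for the much larger web-mined tags used in SentiBank.

```python
from itertools import product

# Toy adjective and noun vocabularies; SentiBank mines these from web photo
# tags at much larger scale (1,533 ANPs in total).
adjectives = ["happy", "beautiful", "scary", "lonely"]
nouns = ["dog", "sky", "road", "face"]

# Each adjective-noun pair becomes one mid-level sentiment concept,
# i.e., one class that a CNN detector can be trained to recognize.
anps = [f"{adj} {noun}" for adj, noun in product(adjectives, nouns)]
print(len(anps), anps[:3])  # 16 ['happy dog', 'happy sky', 'happy road']
```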
{ "cite_N": [ "@cite_55", "@cite_58", "@cite_56" ], "mid": [ "1784731433", "2075456404", "2963992782" ], "abstract": [ "This paper introduces a visual sentiment concept classification method based on deep convolutional neural networks (CNNs). The visual sentiment concepts are adjective noun pairs (ANPs) automatically discovered from the tags of web photos, and can be utilized as effective statistical cues for detecting emotions depicted in the images. Nearly one million Flickr images tagged with these ANPs are downloaded to train the classifiers of the concepts. We adopt the popular model of deep convolutional neural networks which recently shows great performance improvement on classifying large-scale web-based image dataset such as ImageNet. Our deep CNNs model is trained based on Caffe, a newly developed deep learning framework. To deal with the biased training data which only contains images with strong sentiment and to prevent overfitting, we initialize the model with the model weights trained from ImageNet. Performance evaluation shows the newly trained deep CNNs model SentiBank 2.0 (or called DeepSentiBank) is significantly improved in both annotation accuracy and retrieval performance, compared to its predecessors which mainly use binary SVM classification models.", "We address the challenge of sentiment analysis from visual content. In contrast to existing methods which infer sentiment or emotion directly from visual low-level features, we propose a novel approach based on understanding of the visual concepts that are strongly related to sentiments. Our key contribution is two-fold: first, we present a method built upon psychological theories and web mining to automatically construct a large-scale Visual Sentiment Ontology (VSO) consisting of more than 3,000 Adjective Noun Pairs (ANP). Second, we propose SentiBank, a novel visual concept detector library that can be used to detect the presence of 1,200 ANPs in an image. The VSO and SentiBank are distinct from existing work and will open a gate towards various applications enabled by automatic sentiment analysis. Experiments on detecting sentiment of image tweets demonstrate significant improvement in detection accuracy when comparing the proposed SentiBank based predictors with the text-based approaches. The effort also leads to a large publicly available resource consisting of a visual sentiment ontology, a large detector library, and the training testing benchmark for visual sentiment analysis.", "Sentiment analysis of online user generated content is important for many social media analytics tasks. Researchers have largely relied on textual sentiment analysis to develop systems to predict political elections, measure economic indicators, and so on. Recently, social media users are increasingly using images and videos to express their opinions and share their experiences. Sentiment analysis of such large scale visual content can help better extract user sentiments toward events or topics, such as those in image tweets, so that prediction of sentiment from visual content is complementary to textual sentiment analysis. Motivated by the needs in leveraging large scale yet noisy training data to solve the extremely challenging problem of image sentiment analysis, we employ Convolutional Neural Networks (CNN). We first design a suitable CNN architecture for image sentiment analysis. We obtain half a million training samples by using a baseline sentiment algorithm to label Flickr images. 
To make use of such noisy machine labeled data, we employ a progressive strategy to fine-tune the deep network. Furthermore, we improve the performance on Twitter images by inducing domain transfer with a small number of manually labeled Twitter images. We have conducted extensive experiments on manually labeled Twitter images. The results show that the proposed CNN can achieve better performance in image sentiment analysis than competing algorithms." ] }
1812.09041
2906177770
Emotional content is a crucial ingredient in user-generated videos. However, the sparsely expressed emotions in user-generated videos make emotion analysis difficult. In this paper, we propose a new neural approach, the Bi-stream Emotion Attribution-Classification Network (BEAC-Net), to solve three related emotion analysis tasks: emotion recognition, emotion attribution, and emotion-oriented summarization, in an integrated framework. BEAC-Net has two major constituents, an attribution network and a classification network. The attribution network extracts the main emotional segment that classification should focus on in order to mitigate the sparsity problem. The classification network utilizes both the extracted segment and the original video in a bi-stream architecture. We contribute a new dataset for the emotion attribution task with human-annotated ground-truth labels for emotion segments. Experiments on two video datasets demonstrate the superior performance of the proposed framework and the complementary nature of the dual classification streams.
The emotional content in videos can be recognized from visual features, audio features, and their combination. A number of works have attempted to recognize emotions and affect from speech @cite_29 @cite_47 @cite_13 . @cite_26 jointly uses speech and facial expressions. @cite_12 extracts mid-level audio-visual features. @cite_30 employs the visual, auditory, and textual modalities for emotion classification and cross-modal retrieval. @cite_8 provides a comprehensive technique that exploits audio, facial expressions, spatio-temporal information, and mouth movements. Sparse coding @cite_48 @cite_61 has also proven effective for emotion recognition. For a recent survey, we refer interested readers to @cite_43 .
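Several of these systems combine modalities at the decision level. Below is a minimal sketch of such late fusion, with uniform weights as a placeholder for weights that would normally be tuned on validation data; the toy probability vectors are assumptions.

```python
import numpy as np

def late_fuse(prob_audio, prob_visual, prob_text, weights=(1/3, 1/3, 1/3)):
    """Decision-level (late) fusion: combine per-modality class-probability
    vectors with a weighted average, then pick the argmax class."""
    stacked = np.stack([prob_audio, prob_visual, prob_text])  # (3, C)
    fused = np.average(stacked, axis=0, weights=weights)      # (C,)
    return fused.argmax(), fused

# Toy 4-class example: the modalities disagree, and fusion resolves it.
label, fused = late_fuse(np.array([0.6, 0.2, 0.1, 0.1]),
                         np.array([0.2, 0.5, 0.2, 0.1]),
                         np.array([0.5, 0.3, 0.1, 0.1]))
print(label, fused.round(2))  # 0 [0.43 0.33 0.13 0.1]
```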
{ "cite_N": [ "@cite_30", "@cite_61", "@cite_26", "@cite_8", "@cite_48", "@cite_29", "@cite_43", "@cite_47", "@cite_13", "@cite_12" ], "mid": [ "1930223417", "2053233027", "2168053878", "2081835714", "2210641851", "2110052520", "2173163709", "2087618018", "2766272105", "2414603974" ], "abstract": [ "Social media has been a convenient platform for voicing opinions through posting messages, ranging from tweeting a short text to uploading a media file, or any combination of messages. Understanding the perceived emotions inherently underlying these user-generated contents (UGC) could bring light to emerging applications such as advertising and media analytics. Existing research efforts on affective computation are mostly dedicated to single media, either text captions or visual content. Few attempts for combined analysis of multiple media are made, despite that emotion can be viewed as an expression of multimodal experience. In this paper, we explore the learning of highly non-linear relationships that exist among low-level features across different modalities for emotion prediction. Using the deep Bolzmann machine (DBM), a joint density model over the space of multimodal inputs, including visual, auditory, and textual modalities, is developed. The model is trained directly using UGC data without any labeling efforts. While the model learns a joint representation over multimodal inputs, training samples in absence of certain modalities can also be leveraged. More importantly, the joint representation enables emotion-oriented cross-modal retrieval, for example, retrieval of videos using the text query “crazy cat”. The model does not restrict the types of input and output, and hence, in principle, emotion prediction and retrieval on any combinations of media are feasible. Extensive experiments on web videos and images show that the learnt joint representation could be very compact and be complementary to hand-crafted features, leading to performance improvement in both emotion classification and cross-modal retrieval.", "Obtaining a compact and discriminative representation of facial and body expressions is a difficult problem in emotion recognition. Part of the difficulty is capturing microexpressions, i.e., short, involuntary expressions that last for only a fraction of a second: at a micro-temporal scale, there are so many other subtle face and body movements that do not convey semantically meaningful information. We present a novel approach to this problem by exploiting the sparsity of the frequent micro-temporal motion patterns. Local space-time features are extracted over the face and body region for a very short time period, e.g., few milliseconds. A codebook of microexpressions is learned from the data and used to encode the features in a sparse manner. This allows us to obtain a representation that captures the most salient motion patterns of the face and body at a micro-temporal scale. Experiments performed on the AVEC 2012 dataset show our approach achieving the best published performance on the arousal dimension based solely on visual features. We also report experimental results on audio-visual emotion recognition, comparing early and late data fusion techniques.", "The ability of a computer to detect and appropriately respond to changes in a user's affective state has significant implications to human-computer interaction (HCI). 
In this paper, we present our efforts toward audio-visual affect recognition on 11 affective states customized for HCI application (four cognitive motivational and seven basic affective states) of 20 nonactor subjects. A smoothing method is proposed to reduce the detrimental influence of speech on facial expression recognition. The feature selection analysis shows that subjects are prone to use brow movement in face, pitch and energy in prosody to express their affects while speaking. For person-dependent recognition, we apply the voting method to combine the frame-based classification results from both audio and visual channels. The result shows 7.5 improvement over the best unimodal performance. For person-independent test, we apply multistream HMM to combine the information from multiple component streams. This test shows 6.1 improvement over the best component performance", "In this paper we present the techniques used for the University of Montreal's team submissions to the 2013 Emotion Recognition in the Wild Challenge. The challenge is to classify the emotions expressed by the primary human subject in short video clips extracted from feature length movies. This involves the analysis of video clips of acted scenes lasting approximately one-two seconds, including the audio track which may contain human voices as well as background music. Our approach combines multiple deep neural networks for different data modalities, including: (1) a deep convolutional neural network for the analysis of facial expressions within video frames; (2) a deep belief net to capture audio information; (3) a deep autoencoder to model the spatio-temporal information produced by the human actions depicted within the entire scene; and (4) a shallow network architecture focused on extracted features of the mouth of the primary human subject in the scene. We discuss each of these techniques, their performance characteristics and different strategies to aggregate their predictions. Our best single model was a convolutional neural network trained to predict emotions from static frames using two large data sets, the Toronto Face Database and our own set of faces images harvested from Google image search, followed by a per frame aggregation strategy that used the challenge training data. This yielded a test set accuracy of 35.58 . Using our best strategy for aggregating our top performing models into a single predictor we were able to produce an accuracy of 41.03 on the challenge test set. These compare favorably to the challenge baseline test set accuracy of 27.56 .", "With the development of video-sharing websites, P2P, micro-blog, mobile WAP websites, and so on, sensitive videos can be more easily accessed. Effective sensitive video recognition is necessary for web content security. Among web sensitive videos, this paper focuses on violent and horror videos. Based on color emotion and color harmony theories, we extract visual emotional features from videos. A video is viewed as a bag and each shot in the video is represented by a key frame which is treated as an instance in the bag. Then, we combine multi-instance learning (MIL) with sparse coding to recognize violent and horror videos. The resulting MIL-based model can be updated online to adapt to changing web environments. 
We propose a cost-sensitive context-aware multi- instance sparse coding (MI-SC) method, in which the contextual structure of the key frames is modeled using a graph, and fusion between audio and visual features is carried out by extending the classic sparse coding into cost-sensitive sparse coding. We then propose a multi-perspective multi- instance joint sparse coding (MI-J-SC) method that handles each bag of instances from an independent perspective, a contextual perspective, and a holistic perspective. The experiments demonstrate that the features with an emotional meaning are effective for violent and horror video recognition, and our cost-sensitive context-aware MI-SC and multi-perspective MI-J-SC methods outperform the traditional MIL methods and the traditional SVM and KNN-based methods.", "In this contribution we introduce speech emotion recognition by use of continuous hidden Markov models. Two methods are propagated and compared throughout the paper. Within the first method a global statistics framework of an utterance is classified by Gaussian mixture models using derived features of the raw pitch and energy contour of the speech signal. A second method introduces increased temporal complexity applying continuous hidden Markov models considering several states using low-level instantaneous features instead of global statistics. The paper addresses the design of working recognition engines and results achieved with respect to the alluded alternatives. A speech corpus consisting of acted and spontaneous emotion samples in German and English language is described in detail. Both engines have been tested and trained using this equivalent speech corpus. Results in recognition of seven discrete emotions exceeded 86 recognition rate. As a basis of comparison the similar judgment of human deciders classifying the same corpus at 79.8 recognition rate was analyzed.", "Video affective content analysis has been an active research area in recent decades, since emotion is an important component in the classification and retrieval of videos. Video affective content analysis can be divided into two approaches: direct and implicit. Direct approaches infer the affective content of videos directly from related audiovisual features. Implicit approaches, on the other hand, detect affective content from videos based on an automatic analysis of a user’s spontaneous response while consuming the videos. This paper first proposes a general framework for video affective content analysis, which includes video content, emotional descriptors, and users’ spontaneous nonverbal responses, as well as the relationships between the three. Then, we survey current research in both direct and implicit video affective content analysis, with a focus on direct video affective content analysis . Lastly, we identify several challenges in this field and put forward recommendations for future research.", "As an essential way of human emotional behavior understanding, speech emotion recognition (SER) has attracted a great deal of attention in human-centered signal processing. Accuracy in SER heavily depends on finding good affect- related , discriminative features. In this paper, we propose to learn affect-salient features for SER using convolutional neural networks (CNN). The training of CNN involves two stages. In the first stage, unlabeled samples are used to learn local invariant features (LIF) using a variant of sparse auto-encoder (SAE) with reconstruction penalization. 
In the second step, LIF is used as the input to a feature extractor, salient discriminative feature analysis (SDFA), to learn affect-salient, discriminative features using a novel objective function that encourages feature saliency, orthogonality, and discrimination for SER. Our experimental results on benchmark datasets show that our approach leads to stable and robust recognition performance in complex scenes (e.g., with speaker and language variation, and environment distortion) and outperforms several well-established SER features.", "Speech emotion recognition is challenging because of the affective gap between the subjective emotions and low-level features. Integrating multilevel feature learning and model training, deep convolutional neural networks (DCNN) has exhibited remarkable success in bridging the semantic gap in visual tasks like image classification, object detection. This paper explores how to utilize a DCNN to bridge the affective gap in speech signals. To this end, we first extract three channels of log Mel-spectrograms (static, delta, and delta delta) similar to the red, green, blue (RGB) image representation as the DCNN input. Then, the AlexNet DCNN model pretrained on the large ImageNet dataset is employed to learn high-level feature representations on each segment divided from an utterance. The learned segment-level features are aggregated by a discriminant temporal pyramid matching (DTPM) strategy. DTPM combines temporal pyramid matching and optimal Lp-norm pooling to form a global utterance-level feature representation, followed by the linear support vector machines for emotion classification. Experimental results on four public datasets, that is, EMO-DB, RML, eNTERFACE05, and BAUM-1s, show the promising performance of our DCNN model and the DTPM strategy. Another interesting finding is that the DCNN model pretrained for image applications performs reasonably good in affective speech feature extraction. Further fine tuning on the target emotional speech datasets substantially promotes recognition performance.", "In today's society where audio-visual content such as professionally edited and user-generated videos is ubiquitous, automatic analysis of this content is a decisive functionality. Within this context, there is an extensive ongoing research about understanding the semantics (i.e., facts) such as objects or events in videos. However, little research has been devoted to understanding the emotional content of the videos. In this paper, we address this issue and introduce a system that performs emotional content analysis of professionally edited and user-generated videos. We concentrate both on the representation and modeling aspects. Videos are represented using mid-level audio-visual features. More specifically, audio and static visual representations are automatically learned from raw data using convolutional neural networks (CNNs). In addition, dense trajectory based motion and SentiBank domain-specific features are incorporated. By means of ensemble learning and fusion mechanisms, videos are classified into one of predefined emotion categories. 
Results obtained on the VideoEmotion dataset and a subset of the DEAP dataset show that (1) higher level representations perform better than low-level features, (2) among audio features, mid-level learned representations perform better than mid-level handcrafted ones, (3) incorporating motion and domain-specific information leads to a notable performance gain, and (4) ensemble learning is superior to multi-class support vector machines (SVMs) for video affective content analysis." ] }
1812.09041
2906177770
Emotional content is a crucial ingredient in user-generated videos. However, the sparsely expressed emotions in user-generated videos make emotion analysis difficult. In this paper, we propose a new neural approach, the Bi-stream Emotion Attribution-Classification Network (BEAC-Net), to solve three related emotion analysis tasks: emotion recognition, emotion attribution, and emotion-oriented summarization, in an integrated framework. BEAC-Net has two major constituents, an attribution network and a classification network. The attribution network extracts the main emotional segment that classification should focus on in order to mitigate the sparsity problem. The classification network utilizes both the extracted segment and the original video in a bi-stream architecture. We contribute a new dataset for the emotion attribution task with human-annotated ground-truth labels for emotion segments. Experiments on two video datasets demonstrate the superior performance of the proposed framework and the complementary nature of the dual classification streams.
Most existing work on emotion understanding from video focuses on classification. As emotional content is sparsely expressed in user-generated videos, the task of identifying emotional segments in a video @cite_1 @cite_35 @cite_54 can assist the classification task. Noting the synergy between the two tasks, in this paper we propose a multi-task neural network that tackles both simultaneously.
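To make the multi-task idea concrete, here is a hedged sketch of a shared encoder with an attribution head and a classification head trained jointly. This is a generic two-head design for illustration only, not the BEAC-Net architecture itself (which uses a separate attribution network and a bi-stream classifier); all dimensions and names are assumptions.

```python
import torch
import torch.nn as nn

class AttributionClassificationNet(nn.Module):
    """Generic two-head sketch: one head scores each frame's emotional
    relevance (attribution), the other classifies the relevance-weighted
    video representation (recognition)."""
    def __init__(self, feat_dim=512, num_classes=8):
        super().__init__()
        self.encoder = nn.GRU(feat_dim, 256, batch_first=True)
        self.attribution_head = nn.Linear(256, 1)            # per-frame relevance
        self.classification_head = nn.Linear(256, num_classes)

    def forward(self, frames):                               # frames: (B, T, feat_dim)
        h, _ = self.encoder(frames)                          # (B, T, 256)
        relevance = torch.sigmoid(self.attribution_head(h)).squeeze(-1)  # (B, T)
        weights = relevance / (relevance.sum(dim=1, keepdim=True) + 1e-8)
        pooled = (weights.unsqueeze(-1) * h).sum(dim=1)      # relevance-weighted pooling
        return self.classification_head(pooled), relevance

# Joint loss: classification plus supervision on annotated emotion segments.
model = AttributionClassificationNet()
logits, relevance = model(torch.randn(2, 30, 512))           # 2 clips, 30 frames each
loss = nn.functional.cross_entropy(logits, torch.tensor([1, 3])) + \
       nn.functional.binary_cross_entropy(relevance, torch.rand(2, 30))
```

The attribution head both localizes the emotional segment and pools features for classification, which is the synergy between the two tasks referred to above.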
{ "cite_N": [ "@cite_35", "@cite_54", "@cite_1" ], "mid": [ "2414501075", "2177696193", "2098287351" ], "abstract": [ "Despite growing research interest, emotion understanding for user-generated videos remains a challenging problem. Major obstacles include the diversity and complexity of video content, as well as the sparsity of expressed emotions. For the first time, we systematically study large-scale video emotion recognition by transferring deep feature encodings. In addition to the traditional, supervised recognition, we study the problem of zero-shot emotion recognition, where emotions in the test set are unseen during training. To cope with this task, we utilize knowledge transferred from auxiliary image and text corpora. A novel auxiliary Image Transfer Encoding (ITE) process is proposed to efficiently encode and generate video representation. We also thoroughly investigate different configurations of convolutional neural networks. Comprehensive experiments on multiple datasets demonstrate the effectiveness of our framework.", "Emotion is a key element in user-generated video. However, it is difficult to understand emotions conveyed in such videos due to the complex and unstructured nature of user-generated content and the sparsity of video frames expressing emotion. In this paper, for the first time, we propose a technique for transferring knowledge from heterogeneous external sources, including image and textual data, to facilitate three related tasks in understanding video emotion: emotion recognition, emotion attribution and emotion-oriented summarization. Specifically, our framework (1) learns a video encoding from an auxiliary emotional image dataset in order to improve supervised video emotion recognition, and (2) transfers knowledge from an auxiliary textual corpora for zero-shot recognition of emotion classes unseen during training. The proposed technique for knowledge transfer facilitates novel applications of emotion attribution and emotion-oriented summarization. A comprehensive set of experiments on multiple datasets demonstrate the effectiveness of our framework.", "In this paper, we offer an entirely new view to the problem of high level video parsing. We developed a novel computation method for affective level video segmentation. Its function was to extract emotional segments from videos. Its design was based on the pleasure-arousal-dominance (P-A-D) model of affect representation , which in principle can represent a large number of emotions. Our method had two stages. The first P-A-D estimation stage was defined within framework of the dynamic Bayesian networks (DBNs). A spectral clustering algorithm was applied in the final stage to determine the emotional segments of the video. The performance of our method was compared with the time adaptive clustering (TAC) algorithm and an accelerated version of it which we had developed. According to Vendrig , the TAC algorithm was the best segmentation method. Experiment results will show the feasibility of our method." ] }
1812.09041
2906177770
Emotional content is a crucial ingredient in user-generated videos. However, emotions are expressed only sparsely in user-generated videos, which makes emotion analysis difficult. In this paper, we propose a new neural approach, the Bi-stream Emotion Attribution-Classification Network (BEAC-Net), to solve three related emotion analysis tasks: emotion recognition, emotion attribution, and emotion-oriented summarization, in an integrated framework. BEAC-Net has two major constituents, an attribution network and a classification network. The attribution network extracts the main emotional segment that classification should focus on in order to mitigate the sparsity problem. The classification network utilizes both the extracted segment and the original video in a bi-stream architecture. We contribute a new dataset for the emotion attribution task with human-annotated ground-truth labels for emotion segments. Experiments on two video datasets demonstrate the superior performance of the proposed framework and the complementary nature of the dual classification streams.
Video summarization has been studied for more than two decades @cite_3 , and a detailed review is beyond the scope of this paper. In broad strokes, summarization approaches fall into two major families: keyframe extraction and video skims. A large variety of video features have been exploited, including visual saliency @cite_4 , motion cues @cite_28 , mid-level features @cite_0 @cite_18 , and semantic recognition @cite_10 .
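To make the keyframe-extraction family concrete, here is a minimal, generic sketch (a common strategy, not any specific cited method): per-frame features are clustered, and the frame nearest each cluster centroid is kept as a keyframe.

import numpy as np
from sklearn.cluster import KMeans

def extract_keyframes(frame_features, num_keyframes=5):
    # frame_features: (num_frames, feat_dim) array of per-frame descriptors.
    km = KMeans(n_clusters=num_keyframes, n_init=10).fit(frame_features)
    # For each centroid, keep the index of the closest frame.
    idx = [int(np.argmin(np.linalg.norm(frame_features - c, axis=1)))
           for c in km.cluster_centers_]
    return sorted(idx)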
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_28", "@cite_3", "@cite_0", "@cite_10" ], "mid": [ "2092825074", "2006180404", "2061833242", "2094998392", "2095536970", "2014464472" ], "abstract": [ "Rushes footages are considered as cheap gold mine with the potential for reuse in broadcasting and filmmaking industries. However, mining “gold” from unedited videos such as rushes is challenging as the reusable segments are buried in a large set of redundant information. In this paper, we propose a unified framework for stock footage classification and summarization to support video editors in navigating and organizing rushes videos. Our approach is composed of two steps. First, we employ motion features to filter the undesired camera motion and locate the stock footage. A hierarchical hidden Markov model (HHMM) is proposed to model the motion feature distribution and classify video segments into different categories to decide their potential for reuse. Second, we generate a short video summary to facilitate quick browsing of the stock footages by including the objects and events that are important for storytelling. For objects, we detect the presence of persons and moving objects. For events, we extract a set of features to detect and describe visual (motion activities and scene changes) and audio events (speech clips). A representability measure is then proposed to select the most representative video clips for video summarization. Our experiments show that the proposed HHMM significantly outperforms other methods based on SVM, FSM, and HMM. The automatically generated rushes summaries are also demonstrated to be easy-to-understand, containing little redundancy, and capable of including ground-truth objects and events with shorter durations and relatively pleasant rhythm based on the TRECVID 2007, 2008, and our subjective evaluations.", "Automatic generation of video summarization is one of the key techniques in video management and browsing. In this paper, we present a generic framework of video summarization based on the modeling of viewer's attention. Without fully semantic understanding of video content, this framework takes advantage of understanding of video content, this framework takes advantage of computational attention models and eliminates the needs of complex heuristic rules in video summarization. A set of methods of audio-visual attention model features are proposed and presented. The experimental evaluations indicate that the computational attention based approach is an effective alternative to video semantic analysis for video summarization.", "Key frame extraction is an important technique in video summarization, browsing, searching and understanding. In this paper, we propose a novel approach to extract the most attractive key frames by using a saliency-based visual attention model that bridges the gap between semantic interpretation of the video and low-level features. First, dynamic and static conspicuity maps are constructed based on motion, color and texture features. Then, by introducing suppression factor and motion priority schemes, the conspicuity maps are fused into a saliency map that includes only true attention regions to produce attention curve. Finally, after time-constraint cluster algorithm grouping frames with similar content, the frames with maximum saliency value are selected as key-frames. 
Experimental results demonstrate the effectiveness of our approach for video summarization by retrieving the meaningful key frames.", "The demand for various multimedia applications is rapidly increasing due to the recent advance in the computing and network infrastructure, together with the widespread use of digital video technology. Among the key elements for the success of these applications is how to effectively and efficiently manage and store a huge amount of audio visual information, while at the same time providing user-friendly access to the stored data. This has fueled a quickly evolving research area known as video abstraction. As the name implies, video abstraction is a mechanism for generating a short summary of a video, which can either be a sequence of stationary images (keyframes) or moving images (video skims). In terms of browsing and navigation, a good video abstract will enable the user to gain maximum information about the target video sequence in a specified time constraint or sufficient information in the minimum time. Over past years, various ideas and techniques have been proposed towards the effective abstraction of video contents. The purpose of this article is to provide a systematic classification of these works. We identify and detail, for each approach, the underlying components and how they are addressed in specific works.", "With the explosive growth of web videos on the Internet, it becomes challenging to efficiently browse hundreds or even thousands of videos. When searching an event query, users are often bewildered by the vast quantity of web videos returned by search engines. Exploring such results will be time consuming and it will also degrade user experience. In this paper, we present an approach for event driven web video summarization by tag localization and key-shot mining. We first localize the tags that are associated with each video into its shots. Then, we estimate the relevance of the shots with respect to the event query by matching the shot-level tags with the query. After that, we identify a set of key-shots from the shots that have high relevance scores by exploring the repeated occurrence characteristic of key sub-events. Following the scheme in [6] and [22], we provide two types of summaries, i.e., threaded video skimming and visual-textual storyboard. Experiments are conducted on a corpus that contains 60 queries and more than 10 000 web videos. The evaluation demonstrates the effectiveness of the proposed approach.", "User-generated contents play an important role in the Internet video-sharing activities. Techniques for summarizing the user-generated videos (UGVs) into short representative clips are useful in many applications. This paper introduces an approach for UGV summarization based on semantic recognition. Different from other types of videos like movies or broadcasting news, where the semantic contents may vary greatly across different shots, most UGVs have only a single long shot with relatively consistent high-level semantics. Therefore, a few semantically representative segments are generally sufficient for a UGV summary, which can be selected based on the distribution of semantic recognition scores. In addition, due to the poor shooting quality of many UGVs, factors such as camera shaking and lighting condition are also considered to achieve more pleasant summaries. 
Experiments on over 100 UGVs with both subjective and objective evaluations show that our approach clearly outperforms several alternative methods and is highly efficient. Using a regular laptop, it can produce a summary for a 2-minute video in just 10 seconds." ] }
1812.09041
2906177770
Emotional content is a crucial ingredient in user-generated videos. However, emotions are expressed only sparsely in user-generated videos, which makes emotion analysis difficult. In this paper, we propose a new neural approach, the Bi-stream Emotion Attribution-Classification Network (BEAC-Net), to solve three related emotion analysis tasks: emotion recognition, emotion attribution, and emotion-oriented summarization, in an integrated framework. BEAC-Net has two major constituents, an attribution network and a classification network. The attribution network extracts the main emotional segment that classification should focus on in order to mitigate the sparsity problem. The classification network utilizes both the extracted segment and the original video in a bi-stream architecture. We contribute a new dataset for the emotion attribution task with human-annotated ground-truth labels for emotion segments. Experiments on two video datasets demonstrate the superior performance of the proposed framework and the complementary nature of the dual classification streams.
Recently, we @cite_54 introduced the task of emotion-oriented summarization, which performs video summarization according to the video's emotional content. Inspired by the task of semantic attribution in text analysis, the task of emotion attribution @cite_54 is defined as attributing a video's overall emotion to its individual segments. However, @cite_54 still processed the emotion recognition, summarization, and attribution tasks separately. Intrinsically, emotion recognition can benefit substantially from emotion attribution and emotion-oriented summarization, and the results of emotion attribution can provide further information for emotion-oriented summarization. Thus, our framework is designed to solve these three tasks simultaneously and mutually by introducing spatial transformer networks.
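A minimal sketch of the attribution idea (the notation is ours): given per-segment emotion scores, each segment's responsibility for the video-level emotion is its normalized score, and the most responsible segment can double as an emotion-oriented summary.

import numpy as np

def attribute_emotion(segment_scores, video_emotion):
    # segment_scores: (num_segments, num_emotions); video_emotion: class index.
    relevance = segment_scores[:, video_emotion]
    attribution = relevance / relevance.sum()   # share of responsibility per segment
    summary_segment = int(np.argmax(attribution))
    return attribution, summary_segment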
{ "cite_N": [ "@cite_54" ], "mid": [ "2177696193" ], "abstract": [ "Emotion is a key element in user-generated video. However, it is difficult to understand emotions conveyed in such videos due to the complex and unstructured nature of user-generated content and the sparsity of video frames expressing emotion. In this paper, for the first time, we propose a technique for transferring knowledge from heterogeneous external sources, including image and textual data, to facilitate three related tasks in understanding video emotion: emotion recognition, emotion attribution and emotion-oriented summarization. Specifically, our framework (1) learns a video encoding from an auxiliary emotional image dataset in order to improve supervised video emotion recognition, and (2) transfers knowledge from an auxiliary textual corpora for zero-shot recognition of emotion classes unseen during training. The proposed technique for knowledge transfer facilitates novel applications of emotion attribution and emotion-oriented summarization. A comprehensive set of experiments on multiple datasets demonstrate the effectiveness of our framework." ] }
1812.09041
2906177770
Emotional content is a crucial ingredient in user-generated videos. However, emotions are expressed only sparsely in user-generated videos, which makes emotion analysis difficult. In this paper, we propose a new neural approach, the Bi-stream Emotion Attribution-Classification Network (BEAC-Net), to solve three related emotion analysis tasks: emotion recognition, emotion attribution, and emotion-oriented summarization, in an integrated framework. BEAC-Net has two major constituents, an attribution network and a classification network. The attribution network extracts the main emotional segment that classification should focus on in order to mitigate the sparsity problem. The classification network utilizes both the extracted segment and the original video in a bi-stream architecture. We contribute a new dataset for the emotion attribution task with human-annotated ground-truth labels for emotion segments. Experiments on two video datasets demonstrate the superior performance of the proposed framework and the complementary nature of the dual classification streams.
In previous work @cite_41 , we focused only on the segment with high emotional value and neglected other frames that may carry content information. In this paper, by contrast, the emotional segment and the full video content are combined with different emphases.
{ "cite_N": [ "@cite_41" ], "mid": [ "2617085328" ], "abstract": [ "Emotional content is a key ingredient in user-generated videos. However, due to the emotion sparsely expressed in the user-generated video, it is very difficult to analayze emotions in videos. In this paper, we propose a new architecture--Frame-Transformer Emotion Classification Network (FT-EC-net) to solve three highly correlated emotion analysis tasks: emotion recognition, emotion attribution and emotion-oriented summarization. We also contribute a new dataset for emotion attribution task by annotating the ground-truth labels of attribution segments. A comprehensive set of experiments on two datasets demonstrate the effectiveness of our framework." ] }
1812.09041
2906177770
Emotional content is a crucial ingredient in user-generated videos. However, emotions are expressed only sparsely in user-generated videos, which makes emotion analysis difficult. In this paper, we propose a new neural approach, the Bi-stream Emotion Attribution-Classification Network (BEAC-Net), to solve three related emotion analysis tasks: emotion recognition, emotion attribution, and emotion-oriented summarization, in an integrated framework. BEAC-Net has two major constituents, an attribution network and a classification network. The attribution network extracts the main emotional segment that classification should focus on in order to mitigate the sparsity problem. The classification network utilizes both the extracted segment and the original video in a bi-stream architecture. We contribute a new dataset for the emotion attribution task with human-annotated ground-truth labels for emotion segments. Experiments on two video datasets demonstrate the superior performance of the proposed framework and the complementary nature of the dual classification streams.
The technique proposed in this paper is partially inspired by the spatial transformer network (ST-net) @cite_52 , which was first proposed for image (or feature-map) classification. The ST-net provides the capability for spatial transformation, which benefits various tasks such as co-localization @cite_63 and spatial attention @cite_5 . It is fully differentiable and, as a plug-in module, can transform an image or a feature map with little computational overhead.
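The differentiable warp at the core of an ST-net can be written in a few lines with standard PyTorch primitives; this is a generic 2D sketch rather than the temporal variant used in our network.

import torch
import torch.nn.functional as F

def spatial_transform(feature_map, theta):
    # theta: (B, 2, 3) per-sample affine matrices, typically predicted
    # by a small localization network.
    grid = F.affine_grid(theta, feature_map.size(), align_corners=False)
    return F.grid_sample(feature_map, grid, align_corners=False)

# The identity transform returns the input (up to interpolation):
x = torch.randn(1, 3, 32, 32)
theta = torch.tensor([[[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]])
y = spatial_transform(x, theta)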
{ "cite_N": [ "@cite_5", "@cite_52", "@cite_63" ], "mid": [ "2950178297", "603908379", "" ], "abstract": [ "Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.", "Convolutional Neural Networks define an exceptionally powerful class of models, but are still limited by the lack of ability to be spatially invariant to the input data in a computationally and parameter efficient manner. In this work we introduce a new learnable module, the Spatial Transformer, which explicitly allows the spatial manipulation of data within the network. This differentiable module can be inserted into existing convolutional architectures, giving neural networks the ability to actively spatially transform feature maps, conditional on the feature map itself, without any extra training supervision or modification to the optimisation process. We show that the use of spatial transformers results in models which learn invariance to translation, scale, rotation and more generic warping, resulting in state-of-the-art performance on several benchmarks, and for a number of classes of transformations.", "" ] }
1812.09041
2906177770
Emotional content is a crucial ingredient in user-generated videos. However, emotions are expressed only sparsely in user-generated videos, which makes emotion analysis difficult. In this paper, we propose a new neural approach, the Bi-stream Emotion Attribution-Classification Network (BEAC-Net), to solve three related emotion analysis tasks: emotion recognition, emotion attribution, and emotion-oriented summarization, in an integrated framework. BEAC-Net has two major constituents, an attribution network and a classification network. The attribution network extracts the main emotional segment that classification should focus on in order to mitigate the sparsity problem. The classification network utilizes both the extracted segment and the original video in a bi-stream architecture. We contribute a new dataset for the emotion attribution task with human-annotated ground-truth labels for emotion segments. Experiments on two video datasets demonstrate the superior performance of the proposed framework and the complementary nature of the dual classification streams.
Several variants and improvements of the ST-net have since appeared. @cite_33 adapted it to an end-to-end facial learning framework and proposed a loss function that prevents the ST-net from producing output patches that extend beyond the input boundaries. @cite_36 improved upon the ST-net by theoretically connecting it to the inverse compositional Lucas-Kanade (LK) algorithm, yielding better performance than the original ST-net on various tasks.
{ "cite_N": [ "@cite_36", "@cite_33" ], "mid": [ "2562066862", "2499554887" ], "abstract": [ "In this paper, we establish a theoretical connection between the classical Lucas & Kanade (LK) algorithm and the emerging topic of Spatial Transformer Networks (STNs). STNs are of interest to the vision and learning communities due to their natural ability to combine alignment and classification within the same theoretical framework. Inspired by the Inverse Compositional (IC) variant of the LK algorithm, we present Inverse Compositional Spatial Transformer Networks (IC-STNs). We demonstrate that IC-STNs can achieve better performance than conventional STNs with less model capacity, in particular, we show superior performance in pure image alignment tasks as well as joint alignment classification problems on real-world problems.", "We propose an end-to-end deep convolutional network to simultaneously localize and rank relative visual attributes, given only weakly-supervised pairwise image comparisons. Unlike previous methods, our network jointly learns the attribute’s features, localization, and ranker. The localization module of our network discovers the most informative image region for the attribute, which is then used by the ranking module to learn a ranking model of the attribute. Our end-to-end framework also significantly speeds up processing and is much faster than previous methods. We show state-of-the-art ranking results on various relative attribute datasets, and our qualitative localization results clearly demonstrate our network’s ability to learn meaningful image patches." ] }
1812.09041
2906177770
Emotional content is a crucial ingredient in user-generated videos. However, emotions are expressed only sparsely in user-generated videos, which makes emotion analysis difficult. In this paper, we propose a new neural approach, the Bi-stream Emotion Attribution-Classification Network (BEAC-Net), to solve three related emotion analysis tasks: emotion recognition, emotion attribution, and emotion-oriented summarization, in an integrated framework. BEAC-Net has two major constituents, an attribution network and a classification network. The attribution network extracts the main emotional segment that classification should focus on in order to mitigate the sparsity problem. The classification network utilizes both the extracted segment and the original video in a bi-stream architecture. We contribute a new dataset for the emotion attribution task with human-annotated ground-truth labels for emotion segments. Experiments on two video datasets demonstrate the superior performance of the proposed framework and the complementary nature of the dual classification streams.
BEAC-Net contains a two-stream architecture that extracts features not only from the video segment identified by the attribution network, but also from the entire video as context. This differs from the two-stream architecture introduced by @cite_59 , which contains one convolutional stream that processes the raw frame pixels and another that processes optical-flow features. @cite_50 further generalized this approach to 3D convolutions. By leveraging local motion information from optical flow, these approaches are effective at activity recognition. Optical-flow features are not used in this paper, though we hypothesize they could lead to further improvements.
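A minimal sketch of the bi-stream fusion (module names and sizes are our illustrative choices, not the paper's): one stream encodes the extracted emotional segment, the other encodes the whole video as context, and the two representations are fused before classification.

import torch
import torch.nn as nn

class BiStreamClassifier(nn.Module):
    def __init__(self, feat_dim=512, hidden=256, num_emotions=8):
        super().__init__()
        self.segment_stream = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU())
        self.context_stream = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU())
        self.classifier = nn.Linear(2 * hidden, num_emotions)

    def forward(self, segment_feat, video_feat):
        # Fuse the segment-focused and whole-video representations.
        fused = torch.cat([self.segment_stream(segment_feat),
                           self.context_stream(video_feat)], dim=-1)
        return self.classifier(fused)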
{ "cite_N": [ "@cite_50", "@cite_59" ], "mid": [ "2963524571", "2156303437" ], "abstract": [ "The paucity of videos in current action classification datasets (UCF-101 and HMDB-51) has made it difficult to identify good video architectures, as most methods obtain similar performance on existing small-scale benchmarks. This paper re-evaluates state-of-the-art architectures in light of the new Kinetics Human Action Video dataset. Kinetics has two orders of magnitude more data, with 400 human action classes and over 400 clips per class, and is collected from realistic, challenging YouTube videos. We provide an analysis on how current architectures fare on the task of action classification on this dataset and how much performance improves on the smaller benchmark datasets after pre-training on Kinetics. We also introduce a new Two-Stream Inflated 3D ConvNet (I3D) that is based on 2D ConvNet inflation: filters and pooling kernels of very deep image classification ConvNets are expanded into 3D, making it possible to learn seamless spatio-temporal feature extractors from video while leveraging successful ImageNet architecture designs and even their parameters. We show that, after pre-training on Kinetics, I3D models considerably improve upon the state-of-the-art in action classification, reaching 80.2 on HMDB-51 and 97.9 on UCF-101.", "We investigate architectures of discriminatively trained deep Convolutional Networks (ConvNets) for action recognition in video. The challenge is to capture the complementary information on appearance from still frames and motion between frames. We also aim to generalise the best performing hand-crafted features within a data-driven learning framework. Our contribution is three-fold. First, we propose a two-stream ConvNet architecture which incorporates spatial and temporal networks. Second, we demonstrate that a ConvNet trained on multi-frame dense optical flow is able to achieve very good performance in spite of limited training data. Finally, we show that multitask learning, applied to two different action classification datasets, can be used to increase the amount of training data and improve the performance on both. Our architecture is trained and evaluated on the standard video actions benchmarks of UCF-101 and HMDB-51, where it is competitive with the state of the art. It also exceeds by a large margin previous attempts to use deep nets for video classification." ] }
1812.08901
2906262207
The acceptance of autonomous vehicles is dependent on the rigorous assessment of their safety. Furthermore, the commercial viability of AV programs depends on the ability to estimate the time and resources required to achieve desired safety levels. Naive approaches to estimating the reliability and safety levels of autonomous vehicles under development will require infeasible amounts of testing of a static vehicle configuration. To permit both the estimation of current safety and the prediction of the reliability of future systems, I propose the use of a standard tool for modelling the reliability of evolving software systems, software reliability growth models (SRGMs). Publicly available data from Californian public-road testing of two autonomous vehicle systems is modelled using two of the best-known SRGMs. The ability of the models to accurately estimate current reliability, as well as for current testing data to predict reliability in the future after additional testing, is evaluated. One of the models, the Musa-Okumoto model, appears to be a good estimator and a reasonable predictor.
In this context, we should also consider the work of Huang et al. @cite_24 , who observe that testing driverless cars under conditions reflecting real-world driving is not a particularly efficient way to improve or measure reliability. Their approach is to consider particular driving tasks (in their example, freeway lane changes), build a statistical model of the key parameters governing the variance in lane changes, skew testing (both physical and simulated) heavily towards those areas of the parameter space where failures are likely to occur, and then use the statistical model in reverse to estimate real-world failure rates. SRGMs could, in theory, be used as part of such an approach, as a way to track the decrease in failure rates on specific tasks. It may also be desirable to conduct parallel on-road testing replicating normal driving to validate any Huang-style statistical model; using an SRGM on these data would still be useful as a way to provide ongoing estimates of vehicle failure rates to compare with predictions.
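The skew-then-reweight idea can be illustrated with a toy importance-sampling estimator (the gap distributions and failure threshold below are invented for illustration): rare failures are made common under a skewed test distribution q, and observed outcomes are reweighted by the likelihood ratio p/q to recover the real-world failure rate.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000
# Real-world cut-in gaps: Exponential(scale=5); test distribution skewed
# toward small, dangerous gaps: Exponential(scale=1).
x = rng.exponential(scale=1.0, size=n)
weights = (np.exp(-x / 5.0) / 5.0) / np.exp(-x)   # likelihood ratio p(x)/q(x)
failures = x < 0.5                                # hypothetical failure condition
print(np.mean(failures * weights))                # close to 1 - exp(-0.1), about 0.095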
{ "cite_N": [ "@cite_24" ], "mid": [ "2950211306" ], "abstract": [ "The process to certify highly Automated Vehicles has not yet been defined by any country in the world. Currently, companies test Automated Vehicles on public roads, which is time-consuming and inefficient. We proposed the Accelerated Evaluation concept, which uses a modified statistics of the surrounding vehicles and the Importance Sampling theory to reduce the evaluation time by several orders of magnitude, while ensuring the evaluation results are statistically accurate. In this paper, we further improve the accelerated evaluation concept by using Piecewise Mixture Distribution models, instead of Single Parametric Distribution models. We developed and applied this idea to forward collision control system reacting to vehicles making cut-in lane changes. The behavior of the cut-in vehicles was modeled based on more than 403,581 lane changes collected by the University of Michigan Safety Pilot Model Deployment Program. Simulation results confirm that the accuracy and efficiency of the Piecewise Mixture Distribution method outperformed single parametric distribution methods in accuracy and efficiency, and accelerated the evaluation process by almost four orders of magnitude." ] }
1812.08901
2906262207
The acceptance of autonomous vehicles is dependent on the rigorous assessment of their safety. Furthermore, the commercial viability of AV programs depends on the ability to estimate the time and resources required to achieve desired safety levels. Naive approaches to estimating the reliability and safety levels of autonomous vehicles under development will require infeasible amounts of testing of a static vehicle configuration. To permit both the estimation of current safety and the prediction of the reliability of future systems, I propose the use of a standard tool for modelling the reliability of evolving software systems, software reliability growth models (SRGMs). Publicly available data from Californian public-road testing of two autonomous vehicle systems is modelled using two of the best-known SRGMs. The ability of the models to accurately estimate current reliability, as well as for current testing data to predict reliability in the future after additional testing, is evaluated. One of the models, the Musa-Okumoto model, appears to be a good estimator and a reasonable predictor.
Favaro et al. @cite_4 examined accident data for the Waymo program through to 2017 and found a simple linear relationship between kilometres travelled and cumulative accidents, concluding that accident rates had not improved over time. While accidents are an important metric, it does not follow that there have been no improvements in the function of AV systems. Their analysis does not take into account that manual intervention by the human safety drivers is likely to have prevented a significant number of accidents, and those disengagements, as shown in this work, have become much rarer over time. Furthermore, it does not take into account the contribution to accidents of human drivers that an AV could not reasonably be expected to avoid. Favaro et al. also examined the circumstances of disengagement events in some detail.
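To make the contrast concrete, an SRGM such as the Musa-Okumoto model lets the event rate decay with exposure, whereas a linear fit corresponds to the special case of a constant rate. A small sketch of its mean value function and intensity, with illustrative (not fitted) parameters:

import numpy as np

def mu(t, lam0, theta):
    # Musa-Okumoto mean value function: expected cumulative events after exposure t.
    return np.log(lam0 * theta * t + 1.0) / theta

def intensity(t, lam0, theta):
    # d(mu)/dt: the event rate, which decays as exposure accumulates.
    return lam0 / (lam0 * theta * t + 1.0)

km = np.array([1e5, 3e5, 5e5])                 # cumulative distance driven
print(mu(km, lam0=2e-4, theta=0.05))           # sub-linear growth in cumulative events
print(intensity(km, lam0=2e-4, theta=0.05))    # declining per-km event rate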
{ "cite_N": [ "@cite_4" ], "mid": [ "2755893646" ], "abstract": [ "Autonomous Vehicle technology is quickly expanding its market and has found in Silicon Valley, California, a strong foothold for preliminary testing on public roads. In an effort to promote safety and transparency to consumers, the California Department of Motor Vehicles has mandated that reports of accidents involving autonomous vehicles be drafted and made available to the public. The present work shows an in-depth analysis of the accident reports filed by different manufacturers that are testing autonomous vehicles in California (testing data from September 2014 to March 2017). The data provides important information on autonomous vehicles accidents’ dynamics, related to the most frequent types of collisions and impacts, accident frequencies, and other contributing factors. The study also explores important implications related to future testing and validation of semi-autonomous vehicles, tracing the investigation back to current literature as well as to the current regulatory panorama." ] }
1812.08839
2906579741
The ability to build a model on a source task and subsequently adapt such a model to a new target task is a pervasive need in many astronomical applications. The problem is generally known as transfer learning in machine learning, where domain adaptation is a popular scenario. An example is to build a predictive model on spectroscopic data to identify Supernovae Ia, while subsequently trying to adapt such a model to photometric data. In this paper we propose a new general approach to domain adaptation that does not rely on the proximity of source and target distributions. Instead we simply assume a strong similarity in model complexity across domains, and use active learning to mitigate the dependency on source examples. Our work leads to a new formulation for the likelihood as a function of empirical error using a theoretical learning bound; the result is a novel mapping from generalization error to a likelihood estimation. Results using two real astronomical problems, Supernova Ia classification and identification of Mars landforms, show two main advantages with our approach: increased accuracy performance and substantial savings in computational cost.
Domain adaptation induces a model by exploiting experience gathered from previous tasks @cite_2 . It is considered a subfield of transfer learning @cite_3 , and has become increasingly popular in recent years due to the pervasive nature of task domains exhibiting differences in sample distribution @cite_35 @cite_7 . The central question is whether a previously constructed (source) model can be adapted to a new task, or whether it is better to build a new (target) model from scratch.
{ "cite_N": [ "@cite_35", "@cite_7", "@cite_3", "@cite_2" ], "mid": [ "2403788517", "2951103356", "2165698076", "2104094955" ], "abstract": [ "Domain adaptation aims at learning robust classifiers across domains using labeled data from a source domain. Representation learning methods, which project the original features to a new feature space, have been proved to be quite effective for this task. However, these unsupervised methods neglect the domain information of the input and are not specialized for the classification task. In this work, we address two key factors to guide the representation learning process for domain adaptation of sentiment classification -- one is domain supervision, enforcing the learned representation to better predict the domain of an input, and the other is sentiment supervision which utilizes the source domain sentiment labels to learn sentiment-favorable representations. Experimental results show that these two factors significantly improve the proposed models as expected.", "A key topic in classification is the accuracy loss produced when the data distribution in the training (source) domain differs from that in the testing (target) domain. This is being recognized as a very relevant problem for many computer vision tasks such as image classification, object detection, and object category recognition. In this paper, we present a novel domain adaptation method that leverages multiple target domains (or sub-domains) in a hierarchical adaptation tree. The core idea is to exploit the commonalities and differences of the jointly considered target domains. Given the relevance of structural SVM (SSVM) classifiers, we apply our idea to the adaptive SSVM (A-SSVM), which only requires the target domain samples together with the existing source-domain classifier for performing the desired adaptation. Altogether, we term our proposal as hierarchical A-SSVM (HA-SSVM). As proof of concept we use HA-SSVM for pedestrian detection and object category recognition. In the former we apply HA-SSVM to the deformable part-based model (DPM) while in the latter HA-SSVM is applied to multi-category classifiers. In both cases, we show how HA-SSVM is effective in increasing the detection recognition accuracy with respect to adaptation strategies that ignore the structure of the target data. Since, the sub-domains of the target data are not always known a priori, we shown how HA-SSVM can incorporate sub-domain structure discovery for object category recognition.", "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. 
In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.", "Discriminative learning methods for classification perform well when training and test data are drawn from the same distribution. Often, however, we have plentiful labeled training data from a source domain but wish to learn a classifier which performs well on a target domain with a different distribution and little or no labeled training data. In this work we investigate two questions. First, under what conditions can a classifier trained from source data be expected to perform well on target data? Second, given a small amount of labeled target data, how should we combine it during training with the large amount of labeled source data to achieve the lowest target error at test time? We address the first question by bounding a classifier's target error in terms of its source error and the divergence between the two domains. We give a classifier-induced divergence measure that can be estimated from finite, unlabeled samples from the domains. Under the assumption that there exists some hypothesis that performs well in both domains, we show that this quantity together with the empirical source error characterize the target error of a source-trained classifier. We answer the second question by bounding the target error of a model which minimizes a convex combination of the empirical source and target errors. Previous theoretical work has considered minimizing just the source error, just the target error, or weighting instances from the two domains equally. We show how to choose the optimal combination of source and target error as a function of the divergence, the sample sizes of both domains, and the complexity of the hypothesis class. The resulting bound generalizes the previously studied cases and is always at least as tight as a bound which considers minimizing only the target error or an equal weighting of source and target errors." ] }
1812.08839
2906579741
The ability to build a model on a source task and subsequently adapt such a model to a new target task is a pervasive need in many astronomical applications. The problem is generally known as transfer learning in machine learning, where domain adaptation is a popular scenario. An example is to build a predictive model on spectroscopic data to identify Supernovae Ia, while subsequently trying to adapt such a model to photometric data. In this paper we propose a new general approach to domain adaptation that does not rely on the proximity of source and target distributions. Instead we simply assume a strong similarity in model complexity across domains, and use active learning to mitigate the dependency on source examples. Our work leads to a new formulation for the likelihood as a function of empirical error using a theoretical learning bound; the result is a novel mapping from generalization error to a likelihood estimation. Results using two real astronomical problems, Supernova Ia classification and identification of Mars landforms, show two main advantages with our approach: increased accuracy performance and substantial savings in computational cost.
Feature-based domain adaptation methods attempt to project the source and target datasets into a latent feature space in which the covariate-shift assumption holds. A model is then built on the transformed space and used as the classifier on the target domain. Examples include structural correspondence learning @cite_4 and subspace alignment methods @cite_53 , among others @cite_0 @cite_56 @cite_43 .
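As one concrete instance, subspace alignment admits a simple closed form; the sketch below follows the idea of the cited method, with our own parameter names and a default subspace dimension chosen for illustration.

import numpy as np

def pca_basis(X, d):
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:d].T                               # (features, d) basis

def subspace_alignment(Xs, Xt, d=10):
    Ps, Pt = pca_basis(Xs, d), pca_basis(Xt, d)
    M = Ps.T @ Pt                                 # closed-form alignment matrix
    Zs = (Xs - Xs.mean(axis=0)) @ (Ps @ M)        # source projected into aligned subspace
    Zt = (Xt - Xt.mean(axis=0)) @ Pt              # target coordinates
    return Zs, Zt                                 # train a classifier on Zs, apply to Zt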
{ "cite_N": [ "@cite_4", "@cite_53", "@cite_56", "@cite_0", "@cite_43" ], "mid": [ "2158108973", "2104068492", "22861983", "2130903752", "2159570078" ], "abstract": [ "Discriminative learning methods are widely used in natural language processing. These methods work best when their training and test data are drawn from the same distribution. For many NLP tasks, however, we are confronted with new domains in which labeled data is scarce or non-existent. In such cases, we seek to adapt existing models from a resource-rich source domain to a resource-poor target domain. We introduce structural correspondence learning to automatically induce correspondences among features from different domains. We test our technique on part of speech tagging and show performance gains for varying amounts of source and target training data, as well as improvements in target domain parsing accuracy using our improved tagger.", "In this paper, we introduce a new domain adaptation (DA) algorithm where the source and target domains are represented by subspaces described by eigenvectors. In this context, our method seeks a domain adaptation solution by learning a mapping function which aligns the source subspace with the target one. We show that the solution of the corresponding optimization problem can be obtained in a simple closed form, leading to an extremely fast algorithm. We use a theoretical result to tune the unique hyper parameter corresponding to the size of the subspaces. We run our method on various datasets and show that, despite its intrinsic simplicity, it outperforms state of the art DA methods.", "The exponential increase in the availability of online reviews and recommendations makes sentiment classification an interesting topic in academic and industrial research. Reviews can span so many different domains that it is difficult to gather annotated training data for all of them. Hence, this paper studies the problem of domain adaptation for sentiment classifiers, hereby a system is trained on labeled reviews from one source domain but is meant to be deployed on another. We propose a deep learning approach which learns to extract a meaningful representation for each review in an unsupervised fashion. Sentiment classifiers trained with this high-level feature representation clearly outperform state-of-the-art methods on a benchmark composed of reviews of 4 types of Amazon products. Furthermore, this method scales well and allowed us to successfully perform domain adaptation on a larger industrial-strength dataset of 22 domains.", "One of the most important issues in machine learning is whether one can improve the performance of a supervised learning algorithm by including unlabeled data. Methods that use both labeled and unlabeled data are generally referred to as semi-supervised learning. Although a number of such methods are proposed, at the current stage, we still don't have a complete understanding of their effectiveness. This paper investigates a closely related problem, which leads to a novel approach to semi-supervised learning. Specifically we consider learning predictive structures on hypothesis spaces (that is, what kind of classifiers have good predictive power) from multiple learning tasks. We present a general framework in which the structural learning problem can be formulated and analyzed theoretically, and relate it to learning with unlabeled data. Under this framework, algorithms for structural learning will be proposed, and computational issues will be investigated. 
Experiments will be given to demonstrate the effectiveness of the proposed algorithms in the semi-supervised learning setting.", "This paper addresses pattern classification in the framework of domain adaptation by considering methods that solve problems in which training data are assumed to be available only for a source domain different (even if related) from the target domain of (unlabeled) test data. Two main novel contributions are proposed: 1) a domain adaptation support vector machine (DASVM) technique which extends the formulation of support vector machines (SVMs) to the domain adaptation framework and 2) a circular indirect accuracy assessment strategy for validating the learning of domain adaptation classifiers when no true labels for the target--domain instances are available. Experimental results, obtained on a series of two-dimensional toy problems and on two real data sets related to brain computer interface and remote sensing applications, confirmed the effectiveness and the reliability of both the DASVM technique and the proposed circular validation strategy." ] }
1812.08839
2906579741
The ability to build a model on a source task and subsequently adapt such a model to a new target task is a pervasive need in many astronomical applications. The problem is generally known as transfer learning in machine learning, where domain adaptation is a popular scenario. An example is to build a predictive model on spectroscopic data to identify Supernovae Ia, while subsequently trying to adapt such a model to photometric data. In this paper we propose a new general approach to domain adaptation that does not rely on the proximity of source and target distributions. Instead we simply assume a strong similarity in model complexity across domains, and use active learning to mitigate the dependency on source examples. Our work leads to a new formulation for the likelihood as a function of empirical error using a theoretical learning bound; the result is a novel mapping from generalization error to a likelihood estimation. Results using two real astronomical problems, Supernova Ia classification and identification of Mars landforms, show two main advantages with our approach: increased accuracy performance and substantial savings in computational cost.
From a theoretical viewpoint, previous work has tried to estimate the distance between the source and target distributions @cite_5 @cite_2 @cite_8 , and to employ regularization terms to find models with good generalization performance on both the source and target domains @cite_55 .
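A standard empirical surrogate for such distribution distances is the proxy A-distance: train a classifier to separate source samples from target samples; near-chance error means the domains are close. A minimal sketch, with our choice of classifier:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def proxy_a_distance(Xs, Xt):
    X = np.vstack([Xs, Xt])
    y = np.hstack([np.zeros(len(Xs)), np.ones(len(Xt))])   # domain labels
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.5, random_state=0)
    err = 1.0 - LogisticRegression(max_iter=1000).fit(Xtr, ytr).score(Xte, yte)
    return 2.0 * (1.0 - 2.0 * err)   # approaches 0 when domains are indistinguishable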
{ "cite_N": [ "@cite_5", "@cite_55", "@cite_8", "@cite_2" ], "mid": [ "2131953535", "", "2110091014", "2104094955" ], "abstract": [ "Discriminative learning methods for classification perform well when training and test data are drawn from the same distribution. In many situations, though, we have labeled training data for a source domain, and we wish to learn a classifier which performs well on a target domain with a different distribution. Under what conditions can we adapt a classifier trained on the source domain for use in the target domain? Intuitively, a good feature representation is a crucial factor in the success of domain adaptation. We formalize this intuition theoretically with a generalization bound for domain adaption. Our theory illustrates the tradeoffs inherent in designing a representation for domain adaptation and gives a new justification for a recently proposed model. It also points toward a promising new model for domain adaptation: one which explicitly minimizes the difference between the source and target domains, while at the same time maximizing the margin of the training set.", "", "Empirical risk minimization offers well-known learning guarantees when training and test data come from the same domain. In the real world, though, we often wish to adapt a classifier from a source domain with a large amount of training data to different target domain with very little training data. In this work we give uniform convergence bounds for algorithms that minimize a convex combination of source and target empirical risk. The bounds explicitly model the inherent trade-off between training on a large but inaccurate source data set and a small but accurate target training set. Our theory also gives results when we have multiple source domains, each of which may have a different number of instances, and we exhibit cases in which minimizing a non-uniform combination of source risks can achieve much lower target error than standard empirical risk minimization.", "Discriminative learning methods for classification perform well when training and test data are drawn from the same distribution. Often, however, we have plentiful labeled training data from a source domain but wish to learn a classifier which performs well on a target domain with a different distribution and little or no labeled training data. In this work we investigate two questions. First, under what conditions can a classifier trained from source data be expected to perform well on target data? Second, given a small amount of labeled target data, how should we combine it during training with the large amount of labeled source data to achieve the lowest target error at test time? We address the first question by bounding a classifier's target error in terms of its source error and the divergence between the two domains. We give a classifier-induced divergence measure that can be estimated from finite, unlabeled samples from the domains. Under the assumption that there exists some hypothesis that performs well in both domains, we show that this quantity together with the empirical source error characterize the target error of a source-trained classifier. We answer the second question by bounding the target error of a model which minimizes a convex combination of the empirical source and target errors. Previous theoretical work has considered minimizing just the source error, just the target error, or weighting instances from the two domains equally. 
We show how to choose the optimal combination of source and target error as a function of the divergence, the sample sizes of both domains, and the complexity of the hypothesis class. The resulting bound generalizes the previously studied cases and is always at least as tight as a bound which considers minimizing only the target error or an equal weighting of source and target errors." ] }
1812.08866
2906410737
To support Machine Type Communications (MTC) in next generation mobile networks, NarrowBand-IoT (NB-IoT) has been released by the Third Generation Partnership Project (3GPP) as a promising solution to provide extended coverage and low energy consumption for low cost MTC devices. However, the existing Orthogonal Multiple Access (OMA) scheme in NB-IoT cannot provide connectivity for a massive number of MTC devices. In parallel with the development of NB-IoT, Non-Orthogonal Multiple Access (NOMA), introduced for the fifth generation wireless networks, is deemed to significantly improve the network capacity by providing massive connectivity through sharing the same spectral resources. To leverage NOMA in the context of NB-IoT, we propose a power domain NOMA scheme with user clustering for an NB-IoT system. In particular, the MTC devices are assigned to different ranks within the NOMA clusters where they transmit over the same frequency resources. Then, we formulate an optimization problem to maximize the total throughput of the network by optimizing the resource allocation of MTC devices and NOMA clustering while satisfying the transmission power and quality of service requirements. We prove the NP-hardness of the proposed optimization problem. We further design an efficient heuristic algorithm to solve the proposed optimization problem by jointly optimizing NOMA clustering and resource allocation of MTC devices. Furthermore, we prove that the reduced optimization problem of power control is a convex optimization task. Simulation results are presented to demonstrate the efficiency of the proposed scheme.
Al-Imari et al. @cite_23 proposed a NOMA scheme for uplink data transmission that allows multiple users to share the same sub-carrier without any coding or spreading redundancy. Mostafa et al. @cite_15 studied connectivity maximization for the application of NOMA in NB-IoT, where only two users can share the same sub-carrier. Kiani and Ansari @cite_17 proposed an edge-computing-aware NOMA technique in which MEC users' uplink energy consumption is minimized via an optimization framework. Wu et al. @cite_18 investigated the spectral efficiency maximization problem for wireless-powered NOMA IoT networks. Shahini et al. @cite_21 studied the energy efficiency maximization problem for cognitive radio (CR) based IoT networks, taking into account user buffer occupancy and data rate fairness. Qian et al. @cite_2 proposed an optimal SIC ordering to minimize the maximum task execution latency across devices in an MEC-aware NOMA NB-IoT network. Zhai et al. @cite_16 proposed joint user scheduling and power allocation for NOMA-based wireless networks with massive IoT devices. Xu and Darwazeh @cite_4 proposed a compressed signal waveform solution, termed fast-orthogonal frequency division multiplexing (Fast-OFDM), to potentially double the number of connected devices.
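For a two-device cluster, the power-domain sharing these works build on reduces to a pair of standard SIC rate expressions; the sketch below uses textbook formulas with illustrative numbers, not any cited paper's exact system model.

import numpy as np

def noma_pair_rates(p1, g1, p2, g2, noise=1.0):
    # Uplink power-domain NOMA on one sub-carrier, device 1 decoded first
    # (assumes p1*g1 >= p2*g2): device 1 is decoded treating device 2 as
    # interference; SIC then removes device 1, so device 2 is decoded
    # interference-free.
    r1 = np.log2(1.0 + p1 * g1 / (p2 * g2 + noise))
    r2 = np.log2(1.0 + p2 * g2 / noise)
    return r1, r2   # achievable rates in bits/s/Hz

print(noma_pair_rates(p1=4.0, g1=1.0, p2=1.0, g2=0.5))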
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_21", "@cite_23", "@cite_2", "@cite_15", "@cite_16", "@cite_17" ], "mid": [ "2963738831", "2797309738", "", "1997958020", "2897268940", "2741981612", "", "2772709146" ], "abstract": [ "Wireless powered communication networks (WPCNs), where multiple energy-limited devices first harvest energy in the downlink and then transmit information in the uplink, have been envisioned as a promising solution for the future Internet-of-Things (IoT). Meanwhile, nonorthogonal multiple access (NOMA) has been proposed to improve the system spectral efficiency (SE) of the fifth-generation (5G) networks by allowing concurrent transmissions of multiple users in the same spectrum. As such, NOMA has been recently considered for the uplink of WPCNs based IoT networks with a massive number of devices. However, simultaneous transmissions in NOMA may also incur more transmit energy consumption as well as circuit energy consumption in practice which is critical for energy constrained IoT devices. As a result, compared to orthogonal multiple access schemes such as time-division multiple access (TDMA), whether the SE can be improved and or the total energy consumption can be reduced with NOMA in such a scenario still remains unknown. To answer this question, we first derive the optimal time allocations for maximizing the SE of a TDMA-based WPCN (T-WPCN) and a NOMA-based WPCN (N-WPCN), respectively. Subsequently, we analyze the total energy consumption as well as the maximum SE achieved by these two networks. Surprisingly, it is found that N-WPCN not only consumes more energy, but also is less spectral efficient than T-WPCN. Simulation results verify our theoretical findings and unveil the fundamental performance bottleneck, i.e., “worst user bottleneck problem”, in multiuser NOMA systems.", "Narrowband Internet of Things (NB-IoT) is a low power wide area network (LPWAN) technique introduced in 3GPP release 13. The narrowband transmission scheme enables high capacity, wide coverage, and low power consumption communications. With the increasing demand for services over the air, wireless spectrum is becoming scarce and new techniques are required to boost the number of connected devices within a limited spectral resource to meet the service requirements. This paper provides a compressed signal waveform solution, termed fast-orthogonal frequency division multiplexing (Fast-OFDM), to double potentially the number of connected devices by compressing occupied bandwidth of each device without compromising data rate and bit error rate performance. Simulation is first evaluated for the Fast-OFDM with comparisons to single-carrier-frequency division multiple access (SC-FDMA). Results indicate the same performance for both systems in additive white Gaussian noise channel. Experimental measurements are also presented to show the bandwidth saving benefits of Fast-OFDM. It is shown that in a line-of-sight scenario, Fast-OFDM has similar performance as SC-FDMA but with 50 bandwidth saving. This research paves the way for extended coverage, enhanced capacity and improved data rate of NB-IoT in fifth generation new radio networks.", "", "Orthogonal Frequency Division Multiple Access (OFDMA) as well as other orthogonal multiple access techniques fail to achieve the system capacity limit in the uplink due to the exclusivity in resource allocation. This issue is more prominent when fairness among the users is considered in the system. 
Current Non-Orthogonal Multiple Access (NOMA) techniques introduce redundancy by coding spreading to facilitate the users' signals separation at the receiver, which degrade the system spectral efficiency. Hence, in order to achieve higher capacity, more efficient NOMA schemes need to be developed. In this paper, we propose a NOMA scheme for uplink that removes the resource allocation exclusivity and allows more than one user to share the same subcarrier without any coding spreading redundancy. Joint processing is implemented at the receiver to detect the users' signals. However, to control the receiver complexity, an upper limit on the number of users per subcarrier needs to be imposed. In addition, a novel subcarrier and power allocation algorithm is proposed for the new NOMA scheme that maximizes the users' sum-rate. The link-level performance evaluation has shown that the proposed scheme achieves bit error rate close to the single-user case. Numerical results show that the proposed NOMA scheme can significantly improve the system performance in terms of spectral efficiency and fairness comparing to OFDMA.", "Nonorthogonal multiple access (NOMA) and mobile edge computing (MEC) have been emerging as promising techniques in narrowband Internet of Things (NB-IoT) systems to provide ubiquitously connected IoT devices with efficient transmission and computation. However, the successive interference cancellation (SIC) ordering of NOMA has become the bottleneck limiting the performance improvement for the uplink transmission, which is the dominant traffic flow of NB-IoT communications. Also, in order to guarantee the fairness of task execution latency across NB-IoT devices, the computation resource of MEC units has to be fairly allocated to tasks from IoT devices according to the task size. For these reasons, we investigate the joint optimization of SIC ordering and computation resource allocation in this paper. Specifically, we formulate a combinatorial optimization problem with the objective to minimize the maximum task execution latency required per task bit across NB-IoT devices under the limitation of computation resource. We prove the NP-hardness of this joint optimization problem. To tackle this challenging problem, we first propose an optimal algorithm to obtain the optimal SIC ordering and computation resource allocation in two stages: the convex computation resource allocation optimization followed by the combinatorial SIC ordering optimization. To reduce the computational complexity, we design an efficient heuristic algorithm for the SIC ordering optimization. As a good feature, the proposed low-complexity algorithm suffers a negligible performance degradation in comparison with the optimal algorithm. Simulation results demonstrate the benefits of NOMA in reducing the task execution latency.", "Narrowband Internet of Things (NB-IoT) is a recently standardized technology to support machine-type communications (MTC) in Long Term Evolution-Advanced (LTE-A) Pro networks. NB-IoT can enable energy-efficient communication with extended coverage on a narrow bandwidth of 180 kHz for low-cost MTC devices (MTCDs). The main challenge of supporting MTC in LTE-A Pro networks is to provide connectivity to a massive number of MTCDs. To overcome this challenge, in this paper, we propose a power-domain uplink non-orthogonal multiple access (NOMA) scheme for NB-IoT systems. 
By allowing multiple MTCDs to share the same sub-carrier, NOMA can provide connectivity to more MTCDs than orthogonal multiple access (OMA). We formulate a joint sub-carrier and transmission power allocation problem to maximize the number of MTCDs satisfying the quality of service (QoS) and transmission power requirements. We decompose the problem into two sub-problems and propose algorithms to solve them. Simulation results show that our proposed NOMA scheme can significantly increase the number of successfully connected MTCDs in NB-IoT systems compared to OMA.", "", "With the fast development of Internet of Things (IoT), the fifth generation (5G) wireless networks need to provide massive connectivity of IoT devices and meet the demand for low latency. To satisfy these requirements, nonorthogonal multiple access (NOMA) has been recognized as a promising solution for 5G networks to significantly improve the network capacity. In parallel with the development of NOMA techniques, mobile edge computing (MEC) is becoming one of the key emerging technologies to reduce the latency and improve the quality of service (QoS) for 5G networks. In order to capture the potential gains of NOMA in the context of MEC, this paper proposes an edge computing aware NOMA technique which can enjoy the benefits of uplink NOMA in reducing MEC users’ uplink energy consumption. To this end, we formulate an NOMA-based optimization framework which minimizes the energy consumption of MEC users via optimizing the user clustering, computing and communication resource allocation, and transmit powers. In particular, similar to frequency resource blocks (RBs), we divide the computing capacity available at the cloudlet to computing RBs. Accordingly, we explore the joint allocation of the frequency and computing RBs to the users that are assigned to different order indices within the NOMA clusters. We also design an efficient heuristic algorithm for user clustering and RBs allocation, and formulate a convex optimization problem for the power control to be solved independently per NOMA cluster. The performance of the proposed NOMA scheme is evaluated via simulations." ] }
1812.08972
2905752951
There has recently been a surge of approaches that learn low-dimensional embeddings of nodes in networks. As there are many large-scale real-world networks, it is inefficient for existing approaches to store large numbers of parameters in memory and update them edge after edge. Based on the knowledge that nodes with similar neighborhoods will be close to each other in the embedding space, we propose the COSINE (COmpresSIve NE) algorithm, which reduces the memory footprint and accelerates the training process by sharing parameters among similar nodes. COSINE applies graph partitioning algorithms to networks and builds a parameter-sharing dependency of nodes based on the result of the partitioning. With parameter sharing among similar nodes, COSINE injects prior knowledge about higher-order structural information into the training process, which makes network embedding more efficient and effective. COSINE can be applied to any embedding lookup method and learns high-quality embeddings with limited memory and shorter training time. We conduct experiments on multi-label classification and link prediction, where the baselines and our model have the same memory usage. Experimental results show that COSINE gives baselines up to a 23% increase on classification and up to a 25% increase on link prediction. Moreover, the training time of all representation learning methods using COSINE decreases by 30% to 70%.
Vertex classification is one of the most common semi-supervised tasks in network analysis; it aims to assign each vertex to one or more groups. The task has applications in many areas, such as protein classification @cite_25 , user profiling @cite_43 @cite_48 , and so on.
{ "cite_N": [ "@cite_43", "@cite_25", "@cite_48" ], "mid": [ "2107961038", "2962756421", "" ], "abstract": [ "User attributes, such as occupation, education, and location, are important for many applications. In this paper, we study the problem of profiling user attributes in social network. To capture the correlation between attributes and social connections, we present a new insight that social connections are discriminatively correlated with attributes via a hidden factor -- relationship type. For example, a user's colleagues are more likely to share the same employer with him than other friends. Based on the insight, we propose to co-profile users' attributes and relationship types of their connections. To achieve co-profiling, we develop an efficient algorithm based on an optimization framework. Our algorithm captures our insight effectively. It iteratively profiles attributes by propagation via certain types of connections, and profiles types of connections based on attributes and the network structure. We conduct extensive experiments to evaluate our algorithm. The results show that our algorithm profiles various attributes accurately, which improves the state-of-the-art methods by 12 .", "Prediction tasks over nodes and edges in networks require careful effort in engineering features used by learning algorithms. Recent research in the broader field of representation learning has led to significant progress in automating prediction by learning the features themselves. However, present feature learning approaches are not expressive enough to capture the diversity of connectivity patterns observed in networks. Here we propose node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks. In node2vec, we learn a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes. We define a flexible notion of a node's network neighborhood and design a biased random walk procedure, which efficiently explores diverse neighborhoods. Our algorithm generalizes prior work which is based on rigid notions of network neighborhoods, and we argue that the added flexibility in exploring neighborhoods is the key to learning richer representations. We demonstrate the efficacy of node2vec over existing state-of-the-art techniques on multi-label classification and link prediction in several real-world networks from diverse domains. Taken together, our work represents a new way for efficiently learning state-of-the-art task-independent representations in complex networks.", "" ] }
1812.08972
2905752951
There has recently been a surge of approaches that learn low-dimensional embeddings of nodes in networks. As there are many large-scale real-world networks, it is inefficient for existing approaches to store large numbers of parameters in memory and update them edge after edge. Based on the knowledge that nodes with similar neighborhoods will be close to each other in the embedding space, we propose the COSINE (COmpresSIve NE) algorithm, which reduces the memory footprint and accelerates the training process by sharing parameters among similar nodes. COSINE applies graph partitioning algorithms to networks and builds a parameter-sharing dependency of nodes based on the result of the partitioning. With parameter sharing among similar nodes, COSINE injects prior knowledge about higher-order structural information into the training process, which makes network embedding more efficient and effective. COSINE can be applied to any embedding lookup method and learns high-quality embeddings with limited memory and shorter training time. We conduct experiments on multi-label classification and link prediction, where the baselines and our model have the same memory usage. Experimental results show that COSINE gives baselines up to a 23% increase on classification and up to a 25% increase on link prediction. Moreover, the training time of all representation learning methods using COSINE decreases by 30% to 70%.
Most current node embedding techniques are lookup algorithms, i.e., a matrix holds the embedding vectors of all nodes, and obtaining a specific embedding amounts to looking it up in that matrix. Early works in NRL are mainly based on factorization of the graph Laplacian matrix, such as Isomap @cite_23 , Laplacian Eigenmaps @cite_6 and Social Dimension @cite_58 . However, the computational expense of these approaches is so high that they cannot scale to large networks. Inspired by word embedding methods in Natural Language Processing, DeepWalk @cite_3 and node2vec @cite_25 combine word2vec @cite_21 with different random walk strategies. The LINE model @cite_63 leverages first- and second-order proximities between vertices, and SDNE @cite_62 extends these shallow models with deep neural networks to learn non-linear features. Besides the lookup algorithms, Graph Convolutional Networks (GCN) @cite_20 and GraphSAGE @cite_17 are paradigms of neighborhood-aggregation algorithms: they generate node embeddings from information aggregated over a node's local neighborhood together with a set of shared parameters.
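The contrast between the two paradigms can be sketched in a few lines of Python. The walk generator mimics DeepWalk-style truncated random walks (whose output would feed a skip-gram model, with the resulting embedding stored as a lookup table), while the second function is a single mean-aggregation layer in the spirit of GCN/GraphSAGE; the toy graph, feature sizes, and randomly initialized weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_walks(adj, walk_len=10, walks_per_node=5):
    """Truncated random walks in the style of DeepWalk (a sketch).
    adj: dict mapping node id -> list of neighbor ids."""
    walks = []
    for _ in range(walks_per_node):
        for start in adj:
            walk = [start]
            for _ in range(walk_len - 1):
                walk.append(rng.choice(adj[walk[-1]]))
            walks.append(walk)
    return walks  # fed to skip-gram; embeddings live in a lookup table

def mean_aggregate(adj, features):
    """One GCN/GraphSAGE-style layer: embed a node from its neighborhood
    with shared parameters, rather than reading a per-node table entry."""
    d = features.shape[1]
    W = rng.normal(scale=0.1, size=(2 * d, d))   # shared across all nodes
    out = {}
    for v in adj:
        neigh = features[adj[v]].mean(axis=0)
        out[v] = np.tanh(np.concatenate([features[v], neigh]) @ W)
    return out

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
feats = rng.normal(size=(4, 8))
print(len(random_walks(adj)), mean_aggregate(adj, feats)[0].shape)
```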
{ "cite_N": [ "@cite_62", "@cite_21", "@cite_3", "@cite_6", "@cite_23", "@cite_63", "@cite_58", "@cite_25", "@cite_20", "@cite_17" ], "mid": [ "2393319904", "", "2154851992", "2173649752", "", "1888005072", "2046253692", "2962756421", "2519887557", "2962767366" ], "abstract": [ "Network embedding is an important method to learn low-dimensional representations of vertexes in networks, aiming to capture and preserve the network structure. Almost all the existing network embedding methods adopt shallow models. However, since the underlying network structure is complex, shallow models cannot capture the highly non-linear network structure, resulting in sub-optimal network representations. Therefore, how to find a method that is able to effectively capture the highly non-linear network structure and preserve the global and local structure is an open yet important problem. To solve this problem, in this paper we propose a Structural Deep Network Embedding method, namely SDNE. More specifically, we first propose a semi-supervised deep model, which has multiple layers of non-linear functions, thereby being able to capture the highly non-linear network structure. Then we propose to exploit the first-order and second-order proximity jointly to preserve the network structure. The second-order proximity is used by the unsupervised component to capture the global network structure. While the first-order proximity is used as the supervised information in the supervised component to preserve the local network structure. By jointly optimizing them in the semi-supervised deep model, our method can preserve both the local and global network structure and is robust to sparse networks. Empirically, we conduct the experiments on five real-world networks, including a language network, a citation network and three social networks. The results show that compared to the baselines, our method can reconstruct the original network significantly better and achieves substantial gains in three applications, i.e. multi-label classification, link prediction and visualization.", "", "We present DeepWalk, a novel approach for learning latent representations of vertices in a network. These latent representations encode social relations in a continuous vector space, which is easily exploited by statistical models. DeepWalk generalizes recent advancements in language modeling and unsupervised feature learning (or deep learning) from sequences of words to graphs. DeepWalk uses local information obtained from truncated random walks to learn latent representations by treating walks as the equivalent of sentences. We demonstrate DeepWalk's latent representations on several multi-label network classification tasks for social networks such as BlogCatalog, Flickr, and YouTube. Our results show that DeepWalk outperforms challenging baselines which are allowed a global view of the network, especially in the presence of missing information. DeepWalk's representations can provide F1 scores up to 10 higher than competing methods when labeled data is sparse. In some experiments, DeepWalk's representations are able to outperform all baseline methods while using 60 less training data. DeepWalk is also scalable. It is an online learning algorithm which builds useful incremental results, and is trivially parallelizable. 
These qualities make it suitable for a broad class of real world applications such as network classification, and anomaly detection.", "Clustering is central to many data-driven application domains and has been studied extensively in terms of distance functions and grouping algorithms. Relatively little work has focused on learning representations for clustering. In this paper, we propose Deep Embedded Clustering (DEC), a method that simultaneously learns feature representations and cluster assignments using deep neural networks. DEC learns a mapping from the data space to a lower-dimensional feature space in which it iteratively optimizes a clustering objective. Our experimental evaluations on image and text corpora show significant improvement over state-of-the-art methods.", "", "This paper studies the problem of embedding very large information networks into low-dimensional vector spaces, which is useful in many tasks such as visualization, node classification, and link prediction. Most existing graph embedding methods do not scale for real world information networks which usually contain millions of nodes. In this paper, we propose a novel network embedding method called the LINE,'' which is suitable for arbitrary types of information networks: undirected, directed, and or weighted. The method optimizes a carefully designed objective function that preserves both the local and global network structures. An edge-sampling algorithm is proposed that addresses the limitation of the classical stochastic gradient descent and improves both the effectiveness and the efficiency of the inference. Empirical experiments prove the effectiveness of the LINE on a variety of real-world information networks, including language networks, social networks, and citation networks. The algorithm is very efficient, which is able to learn the embedding of a network with millions of vertices and billions of edges in a few hours on a typical single machine. The source code of the LINE is available online https: github.com tangjianpku LINE .", "Social media such as blogs, Facebook, Flickr, etc., presents data in a network format rather than classical IID distribution. To address the interdependency among data instances, relational learning has been proposed, and collective inference based on network connectivity is adopted for prediction. However, connections in social media are often multi-dimensional. An actor can connect to another actor for different reasons, e.g., alumni, colleagues, living in the same city, sharing similar interests, etc. Collective inference normally does not differentiate these connections. In this work, we propose to extract latent social dimensions based on network information, and then utilize them as features for discriminative learning. These social dimensions describe diverse affiliations of actors hidden in the network, and the discriminative learning can automatically determine which affiliations are better aligned with the class labels. Such a scheme is preferred when multiple diverse relations are associated with the same network. We conduct extensive experiments on social media data (one from a real-world blog site and the other from a popular content sharing site). Our model outperforms representative relational learning methods based on collective inference, especially when few labeled data are available. 
The sensitivity of this model and its connection to existing methods are also examined.", "Prediction tasks over nodes and edges in networks require careful effort in engineering features used by learning algorithms. Recent research in the broader field of representation learning has led to significant progress in automating prediction by learning the features themselves. However, present feature learning approaches are not expressive enough to capture the diversity of connectivity patterns observed in networks. Here we propose node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks. In node2vec, we learn a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes. We define a flexible notion of a node's network neighborhood and design a biased random walk procedure, which efficiently explores diverse neighborhoods. Our algorithm generalizes prior work which is based on rigid notions of network neighborhoods, and we argue that the added flexibility in exploring neighborhoods is the key to learning richer representations. We demonstrate the efficacy of node2vec over existing state-of-the-art techniques on multi-label classification and link prediction in several real-world networks from diverse domains. Taken together, our work represents a new way for efficiently learning state-of-the-art task-independent representations in complex networks.", "We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin.", "Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions. However, most existing approaches require that all nodes in the graph are present during training of the embeddings; these previous approaches are inherently transductive and do not naturally generalize to unseen nodes. Here we present GraphSAGE, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings. Instead of training individual embeddings for each node, we learn a function that generates embeddings by sampling and aggregating features from a node's local neighborhood. Our algorithm outperforms strong baselines on three inductive node-classification benchmarks: we classify the category of unseen nodes in evolving information graphs based on citation and Reddit post data, and we show that our algorithm generalizes to completely unseen graphs using a multi-graph dataset of protein-protein interactions." ] }
1812.08972
2905752951
There has recently been a surge of approaches that learn low-dimensional embeddings of nodes in networks. As there are many large-scale real-world networks, it is inefficient for existing approaches to store large numbers of parameters in memory and update them edge after edge. Based on the knowledge that nodes with similar neighborhoods will be close to each other in the embedding space, we propose the COSINE (COmpresSIve NE) algorithm, which reduces the memory footprint and accelerates the training process by sharing parameters among similar nodes. COSINE applies graph partitioning algorithms to networks and builds a parameter-sharing dependency of nodes based on the result of the partitioning. With parameter sharing among similar nodes, COSINE injects prior knowledge about higher-order structural information into the training process, which makes network embedding more efficient and effective. COSINE can be applied to any embedding lookup method and learns high-quality embeddings with limited memory and shorter training time. We conduct experiments on multi-label classification and link prediction, where the baselines and our model have the same memory usage. Experimental results show that COSINE gives baselines up to a 23% increase on classification and up to a 25% increase on link prediction. Moreover, the training time of all representation learning methods using COSINE decreases by 30% to 70%.
Closely related to our model, HARP @cite_12 first coarsens the graph into a smaller one consisting of supernodes. Network embedding methods are then applied to learn representations of the supernodes, and these representations serve as initial values for the supernodes' constituent nodes when the embedding methods are run again over the finer-grained graphs. Compared with HARP, MILE @cite_8 performs embedding refinement to learn better representations for nodes in the finer-grained networks, with lower computational cost and higher flexibility. While HARP and MILE still follow the embedding-lookup setting of previous work, our framework reduces memory usage and improves scalability.
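A minimal sketch of the coarsen-then-refine idea follows. The greedy edge matching and the simple copy-down prolongation are deliberate simplifications of the hybrid matching and learned refinement used by HARP and MILE; the graph, sizes, and initialization are made up.

```python
import numpy as np

def coarsen(edges, n):
    """Greedy edge matching: merge matched endpoints into supernodes.
    Returns a node -> supernode mapping and the coarsened edge list."""
    match = {}
    for u, v in edges:
        if u not in match and v not in match and u != v:
            match[u] = match[v] = len(set(match.values()))
    for u in range(n):                        # unmatched nodes stay alone
        match.setdefault(u, len(set(match.values())))
    coarse_edges = {(match[u], match[v]) for u, v in edges
                    if match[u] != match[v]}
    return match, list(coarse_edges)

def prolongate(match, coarse_emb):
    """Initialize each node with its supernode's learned embedding."""
    return np.stack([coarse_emb[match[u]] for u in sorted(match)])

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
match, ce = coarsen(edges, n=4)
n_super = len(set(match.values()))
coarse_emb = np.random.default_rng(0).normal(size=(n_super, 4))
init = prolongate(match, coarse_emb)          # then re-train on the fine graph
print(match, init.shape)
```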
{ "cite_N": [ "@cite_12", "@cite_8" ], "mid": [ "2700550412", "2788760796" ], "abstract": [ "We present HARP, a novel method for learning low dimensional embeddings of a graph's nodes which preserves higher-order structural features. Our proposed method achieves this by compressing the input graph prior to embedding it, effectively avoiding troublesome embedding configurations (i.e. local minima) which can pose problems to non-convex optimization. HARP works by finding a smaller graph which approximates the global structure of its input. This simplified graph is used to learn a set of initial representations, which serve as good initializations for learning representations in the original, detailed graph. We inductively extend this idea, by decomposing a graph in a series of levels, and then embed the hierarchy of graphs from the coarsest one to the original graph. HARP is a general meta-strategy to improve all of the state-of-the-art neural algorithms for embedding graphs, including DeepWalk, LINE, and Node2vec. Indeed, we demonstrate that applying HARP's hierarchical paradigm yields improved implementations for all three of these methods, as evaluated on both classification tasks on real-world graphs such as DBLP, BlogCatalog, CiteSeer, and Arxiv, where we achieve a performance gain over the original implementations by up to 14 Macro F1.", "Recently there has been a surge of interest in designing graph embedding methods. Few, if any, can scale to a large-sized graph with millions of nodes due to both computational complexity and memory requirements. In this paper, we relax this limitation by introducing the MultI-Level Embedding (MILE) framework -- a generic methodology allowing contemporary graph embedding methods to scale to large graphs. MILE repeatedly coarsens the graph into smaller ones using a hybrid matching technique to maintain the backbone structure of the graph. It then applies existing embedding methods on the coarsest graph and refines the embeddings to the original graph through a novel graph convolution neural network that it learns. The proposed MILE framework is agnostic to the underlying graph embedding techniques and can be applied to many existing graph embedding methods without modifying them. We employ our framework on several popular graph embedding techniques and conduct embedding for real-world graphs. Experimental results on five large-scale datasets demonstrate that MILE significantly boosts the speed (order of magnitude) of graph embedding while also often generating embeddings of better quality for the task of node classification. MILE can comfortably scale to a graph with 9 million nodes and 40 million edges, on which existing methods run out of memory or take too long to compute on a modern workstation." ] }
1812.08972
2905752951
There has recently been a surge of approaches that learn low-dimensional embeddings of nodes in networks. As there are many large-scale real-world networks, it is inefficient for existing approaches to store large numbers of parameters in memory and update them edge after edge. Based on the knowledge that nodes with similar neighborhoods will be close to each other in the embedding space, we propose the COSINE (COmpresSIve NE) algorithm, which reduces the memory footprint and accelerates the training process by sharing parameters among similar nodes. COSINE applies graph partitioning algorithms to networks and builds a parameter-sharing dependency of nodes based on the result of the partitioning. With parameter sharing among similar nodes, COSINE injects prior knowledge about higher-order structural information into the training process, which makes network embedding more efficient and effective. COSINE can be applied to any embedding lookup method and learns high-quality embeddings with limited memory and shorter training time. We conduct experiments on multi-label classification and link prediction, where the baselines and our model have the same memory usage. Experimental results show that COSINE gives baselines up to a 23% increase on classification and up to a 25% increase on link prediction. Moreover, the training time of all representation learning methods using COSINE decreases by 30% to 70%.
Compression of Convolutional Neural Networks (CNNs) has been extensively studied and mainly falls into the following three branches. First, low-rank matrix/tensor factorization @cite_13 @cite_35 @cite_9 rests on the assumption that each of the network's weight matrices can be approximated by a low-rank factorization. Second, network pruning @cite_36 @cite_39 @cite_30 @cite_31 removes unimportant weights in the neural network to make it sparse. Third, network quantization reduces the number of bits required to represent each weight, as in HashedNet @cite_59 and QNN @cite_41 . There are also several techniques for compressing word embeddings. Character-based neural language models @cite_19 @cite_54 reduce the number of unique word types, but face the problem that East Asian languages such as Chinese and Japanese have large vocabularies. To sidestep this problem, @cite_11 constructs the embeddings from a few basis vectors, using methods involving pruning and deep compositional coding. Besides, Word2Bits @cite_44 extends word2vec @cite_21 with a quantization function, showing that training with the function acts as a regularizer.
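The three branches can be illustrated on a single weight (or embedding) matrix. The sketch below applies a rank-r SVD truncation, magnitude pruning, and uniform scalar quantization; the matrix, rank, sparsity level, and number of quantization levels are arbitrary choices for illustration, not the settings of the cited methods.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 128))              # a weight/embedding matrix

# 1) low-rank factorization: keep the top-r singular directions
U, s, Vt = np.linalg.svd(W, full_matrices=False)
r = 16
W_lowrank = (U[:, :r] * s[:r]) @ Vt[:r]      # stored as two thin matrices

# 2) magnitude pruning: zero out the 90% smallest-magnitude weights
threshold = np.quantile(np.abs(W), 0.9)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)

# 3) uniform quantization to 8 levels (3 bits per weight)
levels = np.linspace(W.min(), W.max(), 8)
W_quant = levels[np.abs(W[..., None] - levels).argmin(-1)]

for name, approx in [("low-rank", W_lowrank),
                     ("pruned", W_pruned),
                     ("quantized", W_quant)]:
    err = np.linalg.norm(W - approx) / np.linalg.norm(W)
    print(f"{name}: relative error {err:.3f}")
```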
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_36", "@cite_41", "@cite_9", "@cite_54", "@cite_21", "@cite_39", "@cite_44", "@cite_19", "@cite_59", "@cite_31", "@cite_13", "@cite_11" ], "mid": [ "", "2950967261", "2119144962", "2950894517", "", "", "", "", "2794137799", "1938755728", "2952432176", "", "2058641082", "2766061613" ], "abstract": [ "", "The focus of this paper is speeding up the evaluation of convolutional neural networks. While delivering impressive results across a range of computer vision and machine learning tasks, these networks are computationally demanding, limiting their deployability. Convolutional layers generally consume the bulk of the processing time, and so in this work we present two simple schemes for drastically speeding up these layers. This is achieved by exploiting cross-channel or filter redundancy to construct a low rank basis of filters that are rank-1 in the spatial domain. Our methods are architecture agnostic, and can be easily applied to existing CPU and GPU convolutional frameworks for tuneable speedup performance. We demonstrate this with a real world network designed for scene text character recognition, showing a possible 2.5x speedup with no loss in accuracy, and 4.5x speedup with less than 1 drop in accuracy, still achieving state-of-the-art on standard benchmarks.", "Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce \"deep compression\", a three stage pipeline: pruning, trained quantization and Huffman coding, that work together to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing, finally, we apply Huffman coding. After the first two steps we retrain the network to fine tune the remaining connections and the quantized centroids. Pruning, reduces the number of connections by 9x to 13x; Quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35x, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49x from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU, compressed network has 3x to 4x layerwise speedup and 3x to 7x better energy efficiency.", "We introduce a method to train Quantized Neural Networks (QNNs) --- neural networks with extremely low precision (e.g., 1-bit) weights and activations, at run-time. At train-time the quantized weights and activations are used for computing the parameter gradients. During the forward pass, QNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations. As a result, power consumption is expected to be drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to their 32-bit counterparts. For example, our quantized version of AlexNet with 1-bit weights and 2-bit activations achieves @math top-1 accuracy. 
Moreover, we quantize the parameter gradients to 6-bits as well which enables gradients computation using only bit-wise operation. Quantized recurrent neural networks were tested over the Penn Treebank dataset, and achieved comparable accuracy as their 32-bit counterparts using only 4-bits. Last but not least, we programmed a binary matrix multiplication GPU kernel with which it is possible to run our MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The QNN code is available online.", "", "", "", "", "Word vectors require significant amounts of memory and storage, posing issues to resource limited devices like mobile phones and GPUs. We show that high quality quantized word vectors using 1-2 bits per parameter can be learned by introducing a quantization function into Word2Vec. We furthermore show that training with the quantization function acts as a regularizer. We train word vectors on English Wikipedia (2017) and evaluate them on standard word similarity and analogy tasks and on question answering (SQuAD). Our quantized word vectors not only take 8-16x less space than full precision (32 bit) word vectors but also outperform them on word similarity tasks and question answering.", "We describe a simple neural language model that relies only on character-level inputs. Predictions are still made at the word-level. Our model employs a convolutional neural network (CNN) and a highway network over characters, whose output is given to a long short-term memory (LSTM) recurrent neural network language model (RNN-LM). On the English Penn Treebank the model is on par with the existing state-of-the-art despite having 60 fewer parameters. On languages with rich morphology (Arabic, Czech, French, German, Spanish, Russian), the model outperforms word-level morpheme-level LSTM baselines, again with fewer parameters. The results suggest that on many languages, character inputs are sufficient for language modeling. Analysis of word representations obtained from the character composition part of the model reveals that the model is able to encode, from characters only, both semantic and orthographic information.", "As deep nets are increasingly used in applications suited for mobile devices, a fundamental dilemma becomes apparent: the trend in deep learning is to grow models to absorb ever-increasing data set sizes; however mobile devices are designed with very little memory and cannot store such large models. We present a novel network architecture, HashedNets, that exploits inherent redundancy in neural networks to achieve drastic reductions in model sizes. HashedNets uses a low-cost hash function to randomly group connection weights into hash buckets, and all connections within the same hash bucket share a single parameter value. These parameters are tuned to adjust to the HashedNets weight sharing architecture with standard backprop during training. Our hashing procedure introduces no additional memory overhead, and we demonstrate on several benchmark data sets that HashedNets shrink the storage requirements of neural networks substantially while mostly preserving generalization performance.", "", "While Deep Neural Networks (DNNs) have achieved tremendous success for large vocabulary continuous speech recognition (LVCSR) tasks, training of these networks is slow. One reason is that DNNs are trained with a large number of training parameters (i.e., 10-50 million). 
Because networks are trained with a large number of output targets to achieve good performance, the majority of these parameters are in the final weight layer. In this paper, we propose a low-rank matrix factorization of the final weight layer. We apply this low-rank technique to DNNs for both acoustic modeling and language modeling. We show on three different LVCSR tasks ranging between 50-400 hrs, that a low-rank factorization reduces the number of parameters of the network by 30-50 . This results in roughly an equivalent reduction in training time, without a significant loss in final recognition accuracy, compared to a full-rank representation.", "Natural language processing (NLP) models often require a massive number of parameters for word embeddings, resulting in a large storage or memory footprint. Deploying neural NLP models to mobile devices requires compressing the word embeddings without any significant sacrifices in performance. For this purpose, we propose to construct the embeddings with few basis vectors. For each word, the composition of basis vectors is determined by a hash code. To maximize the compression rate, we adopt the multi-codebook quantization approach instead of binary coding scheme. Each code is composed of multiple discrete numbers, such as (3, 2, 1, 8), where the value of each component is limited to a fixed range. We propose to directly learn the discrete codes in an end-to-end neural network by applying the Gumbel-softmax trick. Experiments show the compression rate achieves 98 in a sentiment analysis task and 94 99 in machine translation tasks without performance loss. In both tasks, the proposed method can improve the model performance by slightly lowering the compression rate. Compared to other approaches such as character-level segmentation, the proposed method is language-independent and does not require modifications to the network architecture." ] }
1812.08843
2949770501
In important applications involving multi-task networks with multiple objectives, agents in the network need to decide between these multiple objectives and reach an agreement about which single objective to follow for the network. In this work we propose a distributed decision-making algorithm. The agents are assumed to observe data that may be generated by different models. Through localized interactions, the agents reach agreement about which model to track and interact with each other in order to enhance the network performance. We investigate the approach for both static and mobile networks. The simulations illustrate the performance of the proposed strategies.
Bio-inspired systems are designed to mimic the behavior of some animal groups such as bee swarms, birds flying in formation, and schools of fish @cite_11 @cite_0 @cite_17 @cite_10 @cite_9 @cite_16 . Diffusion strategies can be used to model some of these coordinated types of behavior, as well as solve inference and estimation tasks in a distributed manner over networks @cite_5 @cite_1 . We may distinguish between two types of networks: single-task and multi-task networks. In single-task implementations @cite_5 @cite_1 , the networks consist of agents that are interested in the same objective and sense data that are generated by the same model. An analogy would be a school of fish tracking a food source: all elements in the fish school sense distance and direction to the same food source and are interested in approaching it. On the other hand, multi-task networks @cite_8 @cite_3 @cite_12 @cite_14 @cite_18 @cite_6 @cite_15 @cite_13 @cite_19 involve agents sensing data arising from different models and different clusters of agents may be interested in identifying separate models. A second analogy is a school of fish sensing information about multiple food sources.
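As a concrete example of the diffusion strategies referred to above, the following is a minimal adapt-then-combine (ATC) diffusion LMS loop for a single-task network. The fully connected uniform combination matrix, the step-size, and the Gaussian data model are illustrative assumptions rather than the setup of any cited work.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, mu = 10, 4, 0.02                       # agents, model size, step-size
w_true = rng.normal(size=M)                  # common model (single-task)
A = np.full((N, N), 1.0 / N)                 # assumed combination weights
w = np.zeros((N, M))                         # each agent's estimate

for _ in range(2000):
    # adaptation step: each agent updates with its own streaming data
    psi = np.empty_like(w)
    for k in range(N):
        u = rng.normal(size=M)               # regression vector
        d = u @ w_true + 0.05 * rng.normal() # noisy measurement
        psi[k] = w[k] + mu * (d - u @ w[k]) * u
    # combination step: fuse neighbors' intermediate estimates
    w = A @ psi

print(np.linalg.norm(w - w_true, axis=1).mean())  # small steady-state error
```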
{ "cite_N": [ "@cite_13", "@cite_18", "@cite_11", "@cite_14", "@cite_8", "@cite_9", "@cite_1", "@cite_3", "@cite_6", "@cite_0", "@cite_19", "@cite_5", "@cite_15", "@cite_16", "@cite_10", "@cite_12", "@cite_17" ], "mid": [ "2046058958", "1482016817", "", "2538000501", "", "2148001991", "", "", "2769267838", "2111714602", "", "2042664989", "", "", "1964788963", "2159585070", "2313175683" ], "abstract": [ "We provide an overview of recent research on belief and opinion dynamics in social networks. We discuss both Bayesian and non-Bayesian models of social learning and focus on the implications of the form of learning (e.g., Bayesian vs. non-Bayesian), the sources of information (e.g., observation vs. communication), and the structure of social networks in which individuals are situated on three key questions: (1) whether social learning will lead to consensus, i.e., to agreement among individuals starting with different views; (2) whether social learning will effectively aggregate dispersed information and thus weed out incorrect beliefs; (3) whether media sources, prominent agents, politicians and the state will be able to manipulate beliefs and spread misinformation in a society.", "We examine the design of self-organizing mobile adaptive networks with multiple targets in which the network nodes form distinct clusters to learn about and purse multiple targets, all while moving in a cohesive collision-free manner. We build upon previous distributed diffusion-based adaptive learning networks that focused on a single target to examine the case with multiple targets in which the nodes do not know the number of targets, and exchange local information with their neighbors in their learning objectives. In particular, we design a method allowing the nodes to switch the target they are tracking thereby engendering the formation of distinct stable learning groups that can split up and purse their distinct targets over time. We provide analytical mean stability and steady state mean-square deviation results along with simulations that demonstrate the efficacy of the proposed method.", "", "There arises the need in many wireless network applications to infer and track different models of interest. Some nodes in the network are informed, where they observe the different models and send information to the uninformed ones. Each uninformed node responds to one informed node and joins its group. In this work, we suggest an adaptive and distributed clustering and partitioning approach that allows the informed nodes in the network to be clustered into many groups according to the observed models; then we apply a decentralized strategy to part the uninformed nodes into groups of approximately equal size around the informed nodes.", "", "Bio-inspired networking techniques have been investigated since more than a decade. Findings in this field have fostered new developments in networking, especially in the most challenging domains such as handling large-scale networks, their dynamic nature, resource constraints, heterogeneity, unattended operation, and robustness. Even though this new research area started with highly theoretical concepts, it can be seen that there is also practical impact. This article aims to give an overview to the general field of bioinspired networking, introducing the key concepts and methodologies. Selected examples that outline the capabilities and the practical relevance are discussed in more detail. 
The presented examples outline the activities of a new community working on bio-inspired networking solutions, which is converging and becomes visible in term of the provided astonishingly efficient solutions.", "", "", "", "In this paper, we investigate the self-organization and cognitive abilities of adaptive networks when the individual agents are allowed to move in pursuit of a target. The nodes act as adaptive entities with localized processing and are able to respond to stimuli in real-time. We apply adaptive diffusion techniques to guide the self-organization process, including harmonious motion and collision avoidance. We also provide stability and mean-square performance analysis of the proposed strategies, together with computer simulation to illustrate results.", "", "Nature provides splendid examples of real-time learning and adaptation behavior that emerges from highly localized interactions among agents of limited capabilities. For example, schools of fish are remarkably apt at configuring their topologies almost instantly in the face of danger [1]: when a predator arrives, the entire school opens up to let the predator through and then coalesces again into a moving body to continue its schooling behavior. Likewise, in bee swarms, only a small fraction of the agents (about 5 ) are informed, and these informed agents are able to guide the entire swarm of bees to their new hive [2]. It is an extraordinary property of biological networks that sophisticated behavior is able to emerge from simple interactions among lower-level agents [3].", "", "", "Escherichia coli is a single‐celled organism that lives in your gut. It is equipped with a set of rotary motors only 45 nm in diameter. Each motor drives a long, thin, helical filament that extends several cell body lengths out into the external medium. The assemblage of motor and filament is called a flagellum. The concerted motion of several flagella enables a cell to swim. A cell can move toward regions that it deems more favorable by measuring changes in the concentrations of certain chemicals in its environment (mostly nutrients), deciding whether life is getting better or worse, and then modulating the direction of rotation of its flagella. Thus, in addition to rotary engines and propellers, E. coli's standard accessories include particle counters, rate meters, and gear boxes. This microorganism is a nanotechnologist's dream. I will discuss the features that make it so, from the perspectives of several scientific disciplines: anatomy, genetics, chemistry, and physics.", "Distributed processing over networks relies on in-network processing and cooperation among neighboring agents. Cooperation is beneficial when agents share a common objective. However, in many applications, agents may belong to different clusters that pursue different objectives. Then, indiscriminate cooperation will lead to undesired results. In this paper, we propose an adaptive clustering and learning scheme that allows agents to learn which neighbors they should cooperate with and which other neighbors they should ignore. In doing so, the resulting algorithm enables the agents to identify their clusters and to attain improved learning and estimation accuracy over networks. We carry out a detailed mean-square analysis and assess the error probabilities of Types I and II, i.e., false alarm and misdetection, for the clustering mechanism. 
Among other results, we establish that these probabilities decay exponentially with the step-sizes so that the probability of correct clustering can be made arbitrarily close to one.", "A swarm of honey bees, Apis mellijera L., will fly to a new homesite only if accompanied by a queen whose presence the swarm perceives through her release of 9-oxo- trans -2-decenoic acid. The airborne swarm, together with its queen, is led by worker bees releasing Nassanoff pheromone." ] }
1812.08843
2949770501
In important applications involving multi-task networks with multiple objectives, agents in the network need to decide between these multiple objectives and reach an agreement about which single objective to follow for the network. In this work we propose a distributed decision-making algorithm. The agents are assumed to observe data that may be generated by different models. Through localized interactions, the agents reach agreement about which model to track and interact with each other in order to enhance the network performance. We investigate the approach for both static and mobile networks. The simulations illustrate the performance of the proposed strategies.
In the latter case, agents need to decide between the multiple objectives and reach agreement on following a single objective for the entire network. In the earlier works @cite_7 @cite_4 , a scenario was considered where agents were assumed to sense data arising from two models, and a diffusion strategy was developed to enable all agents to agree on estimating a single model. The algorithm developed in @cite_7 relies on binary labeling and is therefore applicable only to situations involving two models. In this work, we propose an approach that handles more than two models.
{ "cite_N": [ "@cite_4", "@cite_7" ], "mid": [ "2577808232", "2112603423" ], "abstract": [ "In this paper, we study distributed decision-making over mobile adaptive networks where nodes in the network collect data generated by two different models. The nodes need to decide which model to estimate and track. However, they do not know beforehand which model they observe. Therefore, an effective clustering technique is needed. We apply a clustering technique that reduces the clustering error. Furthermore, introduce an additional term to the motion model to ensure that the nodes move coherently without fragmentation in the network during the decision-making process. Once the network reaches agreement on the desired model, the cooperation among nodes enhances the performance of the estimation task by relaying data throughout the network.", "In distributed processing, agents generally collect data generated by the same underlying unknown model (represented by a vector of parameters) and then solve an estimation or inference task cooperatively. In this paper, we consider the situation in which the data observed by the agents may have risen from two different models. Agents do not know beforehand which model accounts for their data and the data of their neighbors. The objective for the network is for all agents to reach agreement on which model to track and to estimate this model cooperatively. In these situations, where agents are subject to data from unknown different sources, conventional distributed estimation strategies would lead to biased estimates relative to any of the underlying models. We first show how to modify existing strategies to guarantee unbiasedness. We then develop a classification scheme for the agents to identify the models that generated the data, and propose a procedure by which the entire network can be made to converge towards the same model through a collaborative decision-making process. The resulting algorithm is applied to model fish foraging behavior in the presence of two food sources." ] }
1812.08843
2949770501
In important applications involving multi-task networks with multiple objectives, agents in the network need to decide between these multiple objectives and reach an agreement about which single objective to follow for the network. In this work we propose a distributed decision-making algorithm. The agents are assumed to observe data that may be generated by different models. Through localized interactions, the agents reach agreement about which model to track and interact with each other in order to enhance the network performance. We investigate the approach for both static and mobile networks. The simulations illustrate the performance of the proposed strategies.
We consider a distributed mean-square-error estimation problem over an @math -agent network. The connectivity of the agents is described by a graph (see Fig. ). Data sensed by any particular agent can arise from one of several different models. The objective is to reach agreement among all agents in the network on one common model to estimate. Two definitions are introduced: the observed model, which refers to the model from which an agent collects data, and the desired model, which refers to the model the agent decides to estimate. The agents do not know which model generated the data they collect; they also do not know which other agents in their neighborhood sense data arising from the same model. Therefore, each agent needs to determine the subset of its neighbors that observes the same model. This initial step is referred to as clustering. Since the decision-making objective depends on the clustering output, errors made during the clustering process have an impact on the global decision. In this work, we rely on the clustering technique proposed in @cite_2 to reduce this effect.
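A heavily simplified sketch of the neighbor-clustering step is given below: an agent trusts only the neighbors whose current estimates lie within a threshold of its own. This proximity rule is a stand-in for, not the actual test of, the technique in @cite_2 ; the estimates, neighborhood structure, and threshold are made-up values.

```python
import numpy as np

def trusted_neighbors(estimates, neighbors, k, tau=0.5):
    """Agent k keeps only neighbors whose current estimates are close
    to its own (a simplified proxy for 'observing the same model')."""
    return [l for l in neighbors[k]
            if np.linalg.norm(estimates[l] - estimates[k]) <= tau]

# three agents: 0 and 1 observe model A, agent 2 observes model B
estimates = np.array([[1.0, 0.0], [0.9, 0.1], [-1.0, 0.2]])
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(trusted_neighbors(estimates, neighbors, k=0))   # -> [1]
```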
{ "cite_N": [ "@cite_2" ], "mid": [ "2545849206" ], "abstract": [ "We consider the problem of decentralized clustering and estimation over multitask networks, where agents infer and track different models of interest. The agents do not know beforehand which model is generating their own data. They also do not know which agents in their neighborhood belong to the same cluster. We propose a decentralized clustering algorithm aimed at identifying and forming clusters of agents of similar objectives, and at guiding cooperation to enhance the inference performance. One key feature of the proposed technique is the integration of the learning and clustering tasks into a single strategy. We analyze the performance of the procedure and show that the error probabilities of types I and II decay exponentially to zero with the step-size parameter. While links between agents following different objectives are ignored in the clustering process, we nevertheless show how to exploit these links to relay critical information across the network for enhanced performance. Simulation results illustrate the performance of the proposed method in comparison to other useful techniques." ] }
1907.03112
2955263739
Cross-lingual embeddings aim to represent words in multiple languages in a shared vector space by capturing semantic similarities across languages. They are a crucial component for scaling tasks to multiple languages by transferring knowledge from languages with rich resources to low-resource languages. A common approach to learning cross-lingual embeddings is to train monolingual embeddings separately for each language and learn a linear projection from the monolingual spaces into a shared space, where the mapping relies on a small seed dictionary. While there are high-quality generic seed dictionaries and pre-trained cross-lingual embeddings available for many language pairs, there is little research on how they perform on specialised tasks. In this paper, we investigate the best practices for constructing the seed dictionary for a specific domain. We evaluate the embeddings on the sequence labelling task of Curriculum Vitae parsing and show that the size of a bilingual dictionary, the frequency of the dictionary words in the domain corpora and the source of data (task-specific vs generic) influence the performance. We also show that the less training data is available in the low-resource language, the more the construction of the bilingual dictionary matters, and demonstrate that some of the choices are crucial in the zero-shot transfer learning case.
A common way to construct a bilingual seed dictionary is to use either automatic translations of frequent words or word alignments. For instance, @cite_1 select the target word to which the source word is most frequently aligned in parallel corpora. @cite_17 use the 5,000 most frequent words from the source language together with their translations. To investigate the impact of the dictionary on embedding quality, @cite_4 evaluate different factors and conclude that carefully selecting highly reliable, symmetric translation pairs improves performance on benchmark word-translation tasks. The authors also demonstrate that increasing the lexicon size beyond 10,000 pairs yields a slow and steady decrease in performance.
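Once a seed dictionary is fixed, the linear-projection step these approaches share can be sketched with the orthogonal Procrustes solution, which minimizes the Frobenius norm of XW - Z over orthogonal maps W. The dimensions, dictionary size, and synthetic embeddings below are illustrative assumptions.

```python
import numpy as np

def fit_mapping(X, Z):
    """Orthogonal W minimizing ||X W - Z||_F, where rows of X and Z are
    the source/target embeddings of the seed-dictionary word pairs."""
    U, _, Vt = np.linalg.svd(X.T @ Z)
    return U @ Vt

rng = np.random.default_rng(0)
d, n_pairs = 50, 5000                  # embedding dim, seed dictionary size
X = rng.normal(size=(n_pairs, d))      # source-language seed embeddings
W_true, _ = np.linalg.qr(rng.normal(size=(d, d)))
Z = X @ W_true + 0.01 * rng.normal(size=(n_pairs, d))  # noisy targets

W = fit_mapping(X, Z)
print(np.linalg.norm(X @ W - Z) / np.linalg.norm(Z))   # small residual
```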
{ "cite_N": [ "@cite_1", "@cite_4", "@cite_17" ], "mid": [ "342285082", "2508069829", "2126725946" ], "abstract": [ "The distributional hypothesis of Harris (1954), according to which the meaning of words is evidenced by the contexts they occur in, has motivated several effective techniques for obtaining vector space semantic representations of words using unannotated text corpora. This paper argues that lexico-semantic content should additionally be invariant across languages and proposes a simple technique based on canonical correlation analysis (CCA) for incorporating multilingual evidence into vectors generated monolingually. We evaluate the resulting word representations on standard lexical semantic evaluation tasks and show that our method produces substantially better semantic representations than monolingual techniques.", "", "Dictionaries and phrase tables are the basis of modern statistical machine translation systems. This paper develops a method that can automate the process of generating and extending dictionaries and phrase tables. Our method can translate missing word and phrase entries by learning language structures based on large monolingual data and mapping between languages from small bilingual data. It uses distributed representation of words and learns a linear mapping between vector spaces of languages. Despite its simplicity, our method is surprisingly effective: we can achieve almost 90 precision@5 for translation of words between English and Spanish. This method makes little assumption about the languages, so it can be used to extend and refine dictionaries and translation tables for any language pairs." ] }
1907.03081
2954077899
With the rise of the Internet of Things (IoT), fog computing has emerged to help traditional cloud computing in meeting scalability demands. Fog computing makes it possible to fulfill real-time requirements of applications by bringing more processing, storage, and control power geographically closer to edge devices. However, since fog computing is a relatively new field, there is no standard platform for research and development in a realistic environment, and this dramatically inhibits innovation and development of applications suitable for the fog. In response to these challenges, we propose the FDK: A Fog Development Kit for software-defined edge-fog systems. By providing high-level interfaces for allocating computing and networking resources, the FDK abstracts the complexities of fog computing from developers and enables rapid development of edge-fog systems. Also, the FDK supports the utilization of virtualized devices to create a highly realistic emulation environment, allowing fog application prototypes to be built with zero additional costs and enabling portability to a physical infrastructure. We evaluate the resource allocation performance of the FDK using a testbed, including eight edge devices, four fog nodes, and five OpenFlow switches. Our evaluations show that the delay of resource allocation and deallocation is less than 279ms and 256ms for 95% of edge-fog transactions, respectively. Besides, we demonstrate that resource allocations are appropriately enforced and guaranteed, even amidst extreme network congestion.
CloudSim @cite_31 is perhaps the most popular cloud simulation platform available, used for modeling cloud and application provisioning environments. It is a discrete-event simulator written in Java, meaning that it does not actually emulate network entities such as routers and switches. Instead, CloudSim uses a latency matrix containing predefined values for the latency between entities in a virtual network. Additionally, CloudSim can model dynamic user workloads by exposing a set of methods and variables for VM-level resource requirements, and is an all-around tool for simulating and testing new cloud systems.
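As a toy illustration of this discrete-event, latency-matrix style of simulation (illustrative Python only, not CloudSim's actual Java API; the entity indices and latency values are invented for the example):

import heapq

# Predefined latency matrix (seconds) between simulated entities;
# indices stand in for hosts/switches. Values are made up for the example.
LATENCY = [[0.000, 0.002, 0.010],
           [0.002, 0.000, 0.004],
           [0.010, 0.004, 0.000]]

def simulate(messages):
    # messages: list of (send_time, src, dst, payload). Deliveries are
    # processed in timestamp order, as in a discrete-event simulator;
    # no real packets are sent, arrival times come from the matrix.
    events = [(t + LATENCY[s][d], s, d, p) for t, s, d, p in messages]
    heapq.heapify(events)
    while events:
        arrival, src, dst, payload = heapq.heappop(events)
        print(f"t={arrival:.3f}s: entity {dst} received {payload!r} from {src}")

simulate([(0.0, 0, 2, "cloudlet-result"), (0.001, 1, 0, "vm-request")])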
{ "cite_N": [ "@cite_31" ], "mid": [ "2045287414" ], "abstract": [ "Cloud computing is a recent advancement wherein IT infrastructure and applications are provided as ‘services’ to end-users under a usage-based payment model. It can leverage virtualized services even on the fly based on requirements (workload patterns and QoS) varying with time. The application services hosted under Cloud computing model have complex provisioning, composition, configuration, and deployment requirements. Evaluating the performance of Cloud provisioning policies, application workload models, and resources performance models in a repeatable manner under varying system and user configurations and requirements is difficult to achieve. To overcome this challenge, we propose CloudSim: an extensible simulation toolkit that enables modeling and simulation of Cloud computing systems and application provisioning environments. The CloudSim toolkit supports both system and behavior modeling of Cloud system components such as data centers, virtual machines (VMs) and resource provisioning policies. It implements generic application provisioning techniques that can be extended with ease and limited effort. Currently, it supports modeling and simulation of Cloud computing environments consisting of both single and inter-networked clouds (federation of clouds). Moreover, it exposes custom interfaces for implementing policies and provisioning techniques for allocation of VMs under inter-networked Cloud computing scenarios. Several researchers from organizations, such as HP Labs in U.S.A., are using CloudSim in their investigation on Cloud resource provisioning and energy-efficient management of data center resources. The usefulness of CloudSim is demonstrated by a case study involving dynamic provisioning of application services in the hybrid federated clouds environment. The result of this case study proves that the federated Cloud computing model significantly improves the application QoS requirements under fluctuating resource and service demand patterns. Copyright © 2010 John Wiley & Sons, Ltd." ] }
1907.03081
2954077899
With the rise of the Internet of Things (IoT), fog computing has emerged to help traditional cloud computing in meeting scalability demands. Fog computing makes it possible to fulfill real-time requirements of applications by bringing more processing, storage, and control power geographically closer to edge devices. However, since fog computing is a relatively new field, there is no standard platform for research and development in a realistic environment, and this dramatically inhibits innovation and development of applications suitable for the fog. In response to these challenges, we propose the FDK: A Fog Development Kit for software-defined edge-fog systems. By providing high-level interfaces for allocating computing and networking resources, the FDK abstracts the complexities of fog computing from developers and enables rapid development of edge-fog systems. Also, the FDK supports the utilization of virtualized devices to create a highly realistic emulation environment, allowing fog application prototypes to be built with zero additional costs and enabling portability to a physical infrastructure. We evaluate the resource allocation performance of the FDK using a testbed, including eight edge devices, four fog nodes, and five OpenFlow switches. Our evaluations show that the delay of resource allocation and deallocation is less than 279ms and 256ms for 95% of edge-fog transactions, respectively. Besides, we demonstrate that resource allocations are appropriately enforced and guaranteed, even amidst extreme network congestion.
There are also many extensions to CloudSim, such as CloudSimSDN @cite_21 , ContainerCloudSim @cite_33 , and iFogSim @cite_2 , which broaden CloudSim's model to include SDN, Docker container migration simulations, and fog computing, respectively. However, because CloudSim and these associated extensions are strictly simulation-based, they ultimately do not solve the problems of cost and complexity associated with developing an actual edge-fog system. Rather, they simply avoid the problem altogether by simulating the entire system. Therefore, while CloudSim is a worthy platform for evaluating cloud architectures, load balancing algorithms, etc., it fails to actually serve as a valid edge-fog application development platform, because applications built for it only ever run in simulation rather than on real devices. Likewise, the same can be said for most other simulation platforms for similar reasons.
{ "cite_N": [ "@cite_21", "@cite_33", "@cite_2" ], "mid": [ "1512071441", "2604186514", "2414114959" ], "abstract": [ "Software-Defined Networking not only addresses the shortcoming of traditional network technologies in dealing with frequent and immediate changes in cloud data centers but also made network resource management open and innovation-friendly. To further accelerate the innovation pace, accessible and easy-to-learn testbeds are required which estimate and measure the performance of network and host capacity provisioning approaches simultaneously within a data center. This is a challenging task and is often costly if accomplished in a physical environment. Thus, a lightweight and scalable simulation environment is necessary to evaluate the network allocation capacity policies while avoiding such a complicated and expensive facility. This paper introduces CloudSimSDN, a simulation framework for SDN-enabled cloud environments based on CloudSim. This paper develops and presents the overall architecture and features of the framework and provides several use cases. Moreover, we empirically validate the accuracy and effectiveness of CloudSimSDN through a number of simulations of a cloud-based three-tier web application.", "Summary Containers are increasingly gaining popularity and becoming one of the major deployment models in cloud environments. To evaluate the performance of scheduling and allocation policies in containerized cloud data centers, there is a need for evaluation environments that support scalable and repeatable experiments. Simulation techniques provide repeatable and controllable environments, and hence, they serve as a powerful tool for such purpose. This paper introduces ContainerCloudSim, which provides support for modeling and simulation of containerized cloud computing environments. We developed a simulation architecture for containerized clouds and implemented it as an extension of CloudSim. We described a number of use cases to demonstrate how one can plug in and compare their container scheduling and provisioning policies in terms of energy efficiency and SLA compliance. Our system is highly scalable as it supports simulation of large number of containers, given that there are more containers than virtual machines in a data center. Copyright © 2016 John Wiley & Sons, Ltd.", "Summary Internet of Things (IoT) aims to bring every object (eg, smart cameras, wearable, environmental sensors, home appliances, and vehicles) online, hence generating massive volume of data that can overwhelm storage systems and data analytics applications. Cloud computing offers services at the infrastructure level that can scale to IoT storage and processing requirements. However, there are applications such as health monitoring and emergency response that require low latency, and delay that is caused by transferring data to the cloud and then back to the application can seriously impact their performances. To overcome this limitation, Fog computing paradigm has been proposed, where cloud services are extended to the edge of the network to decrease the latency and network congestion. To realize the full potential of Fog and IoT paradigms for real-time analytics, several challenges need to be addressed. The first and most critical problem is designing resource management techniques that determine which modules of analytics applications are pushed to each edge device to minimize the latency and maximize the throughput. 
To this end, we need an evaluation platform that enables the quantification of performance of resource management policies on an IoT or Fog computing infrastructure in a repeatable manner. In this paper we propose a simulator, called iFogSim, to model IoT and Fog environments and measure the impact of resource management techniques in latency, network congestion, energy consumption, and cost. We describe two case studies to demonstrate modeling of an IoT environment and comparison of resource management policies. Moreover, scalability of the simulation toolkit in terms of RAM consumption and execution time is verified under different circumstances." ] }
1907.03081
2954077899
With the rise of the Internet of Things (IoT), fog computing has emerged to help traditional cloud computing in meeting scalability demands. Fog computing makes it possible to fulfill real-time requirements of applications by bringing more processing, storage, and control power geographically closer to edge devices. However, since fog computing is a relatively new field, there is no standard platform for research and development in a realistic environment, and this dramatically inhibits innovation and development of applications suitable for the fog. In response to these challenges, we propose the FDK: A Fog Development Kit for software-defined edge-fog systems. By providing high-level interfaces for allocating computing and networking resources, the FDK abstracts the complexities of fog computing from developers and enables rapid development of edge-fog systems. Also, the FDK supports the utilization of virtualized devices to create a highly realistic emulation environment, allowing fog application prototypes to be built with zero additional costs and enabling portability to a physical infrastructure. We evaluate the resource allocation performance of the FDK using a testbed, including eight edge devices, four fog nodes, and five OpenFlow switches. Our evaluations show that the delay of resource allocation and deallocation is less than 279ms and 256ms for 95% of edge-fog transactions, respectively. Besides, we demonstrate that resource allocations are appropriately enforced and guaranteed, even amidst extreme network congestion.
Typically, network resource management is accomplished using a load balancer, which attempts to find a suitable path to one or more destinations while optimally spreading traffic throughout the network to avoid congestion. In many cases, Equal-Cost Multi-Path (ECMP) routing is used to manage network resources by distributing traffic throughout the network. However, several authors, such as Katta @cite_12 and Zhang @cite_27 , suggest that ECMP's performance is far from optimal and that it is known to result in unevenly distributed network flows and poor performance. In response, Katta proposed Clove @cite_12 , a congestion-aware load balancer that works alongside ECMP by modifying encapsulation packet header fields to manipulate flow paths, ultimately providing lower Flow Completion Times (FCT) than ECMP. Similarly, Zhang proposed Hermes @cite_27 , a distributed load balancing system, which offers up to 10% and 20% better FCT than CONGA and Clove, respectively. While Clove can handle link failures and topology asymmetry, Hermes can handle more advanced and complex uncertainties in the network such as packet black-holes and switch failures.
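For reference, the core ECMP mechanism that the cited work improves upon can be sketched in a few lines: a hash of a flow's five-tuple deterministically selects one of the equal-cost next hops, so every packet of a flow follows the same path regardless of congestion. This is a schematic Python sketch of the idea, not code from any switch or from Clove/Hermes; the next-hop names are invented.

import hashlib

def ecmp_next_hop(five_tuple, next_hops):
    # Hash the (src_ip, dst_ip, src_port, dst_port, proto) five-tuple and
    # pick a next hop. The choice is congestion-oblivious: large flows can
    # pile onto the same path, which is the weakness Clove and Hermes
    # address by rerouting based on sensed path conditions.
    key = "|".join(map(str, five_tuple)).encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return next_hops[digest % len(next_hops)]

print(ecmp_next_hop(("10.0.0.1", "10.0.0.9", 4242, 80, "tcp"),
                    ["spine-1", "spine-2", "spine-3", "spine-4"]))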
{ "cite_N": [ "@cite_27", "@cite_12" ], "mid": [ "2744698795", "2770706713" ], "abstract": [ "Production datacenters operate under various uncertainties such as traffic dynamics, topology asymmetry, and failures. Therefore, datacenter load balancing schemes must be resilient to these uncertainties; i.e., they should accurately sense path conditions and timely react to mitigate the fallouts. Despite significant efforts, prior solutions have important drawbacks. On the one hand, solutions such as Presto and DRB are oblivious to path conditions and blindly reroute at fixed granularity. On the other hand, solutions such as CONGA and CLOVE can sense congestion, but they can only reroute when flowlets emerge; thus, they cannot always react timely to uncertainties. To make things worse, these solutions fail to detect handle failures such as blackholes and random packet drops, which greatly degrades their performance. In this paper, we introduce Hermes, a datacenter load balancer that is resilient to the aforementioned uncertainties. At its heart, Hermes leverages comprehensive sensing to detect path conditions including failures unattended before, and it reacts using timely yet cautious rerouting. Hermes is a practical edge-based solution with no switch modification. We have implemented Hermes with commodity switches and evaluated it through both testbed experiments and large-scale simulations. Our results show that Hermes achieves comparable performance to CONGA and Presto in normal cases, and well handles uncertainties: under asymmetries, Hermes achieves up to 10 and 20 better flow completion time (FCT) than CONGA and CLOVE; under switch failures, it outperforms all other schemes by over 32 .", "Most datacenters still use Equal Cost Multi-Path (ECMP), which performs congestion-oblivious hashing of flows over multiple paths, leading to an uneven distribution of traffic. Alternatives to ECMP come with deployment challenges, as they require either changing the tenant VM network stacks (e.g., MPTCP) or replacing all of the switches (e.g., CONGA). We argue that the hypervisor provides a unique point for implementing load-balancing algorithms that are easy to deploy, while still reacting quickly to congestion. We propose Clove, a scalable load-balancer that (i) runs entirely in the hypervisor, requiring no modifications to tenant VM networking stacks or physical switches, and (ii) works on any topology and adapts quickly to topology changes and traffic shifts. Clove relies on standard ECMP in physical switches, discovers paths using a novel traceroute mechanism, uses software-based flowlet-switching, and continuously learns congestion (or path utilization) state using standard switch features. It then manipulates packet-header fields in the hypervisor switch to direct traffic over less congested paths. Clove achieves 1.5 to 7 times smaller flow-completion times at 70 network load than other load-balancing algorithms that work with existing hardware. Clove also captures some 80 of the performance gain of best-of-breed hardware-based load-balancing algorithms like CONGA that require new equipment." ] }
1907.03081
2954077899
With the rise of the Internet of Things (IoT), fog computing has emerged to help traditional cloud computing in meeting scalability demands. Fog computing makes it possible to fulfill real-time requirements of applications by bringing more processing, storage, and control power geographically closer to edge devices. However, since fog computing is a relatively new field, there is no standard platform for research and development in a realistic environment, and this dramatically inhibits innovation and development of applications suitable for the fog. In response to these challenges, we propose the FDK: A Fog Development Kit for software-defined edge-fog systems. By providing high-level interfaces for allocating computing and networking resources, the FDK abstracts the complexities of fog computing from developers and enables rapid development of edge-fog systems. Also, the FDK supports the utilization of virtualized devices to create a highly realistic emulation environment, allowing fog application prototypes to be built with zero additional costs and enabling portability to a physical infrastructure. We evaluate the resource allocation performance of the FDK using a testbed, including eight edge devices, four fog nodes, and five OpenFlow switches. Our evaluations show that the delay of resource allocation and deallocation is less than 279ms and 256ms for 95% of edge-fog transactions, respectively. Besides, we demonstrate that resource allocations are appropriately enforced and guaranteed, even amidst extreme network congestion.
Resource allocation is key to the success of edge-fog systems, and many fog architectures involving automated resource allocation mechanisms have been proposed. Skarlat @cite_20 created a resource provisioning system for IoT services in fog networks using a fog-cloud middleware component. The middleware oversees the activity of fog colonies, which are micro data centers consisting of fog cells where tasks and data can be distributed and shared among the cells. This system merely manages fog computing resources; it does not allocate those resources, nor does it perform any allocation of network resources.
{ "cite_N": [ "@cite_20" ], "mid": [ "2565437603" ], "abstract": [ "The advent of the Internet of Things (IoT) leadsto the pervasion of business and private spaces with ubiquitous, networked computing devices. These devices do not simply actas sensors, but feature computational, storage, and networkingresources. These resources are close to the edge of the network, and it is a promising approach to exploit them in order to executeIoT services. This concept is known as fog computing.Despite existing theoretical foundations, the adoption of fogcomputing is still at its very beginning. Especially, there is alack of approaches for the leasing and releasing of resources. Toresolve this shortcoming, we present a conceptual framework forfog resource provisioning. We formalize an optimization problemwhich is able to take into account existing resources in fog IoTlandscapes. The goal of this optimization problem is to providedelay-sensitive utilization of available fog-based computationalresources. We evaluate the resource provisioning model to showthe benefits of our contributions. Our results show a decrease indelays of up to 39 compared to a baseline approach, yieldingshorter round-trip times and makespans." ] }
1907.03081
2954077899
With the rise of the Internet of Things (IoT), fog computing has emerged to help traditional cloud computing in meeting scalability demands. Fog computing makes it possible to fulfill real-time requirements of applications by bringing more processing, storage, and control power geographically closer to edge devices. However, since fog computing is a relatively new field, there is no standard platform for research and development in a realistic environment, and this dramatically inhibits innovation and development of applications suitable for the fog. In response to these challenges, we propose the FDK: A Fog Development Kit for software-defined edge-fog systems. By providing high-level interfaces for allocating computing and networking resources, the FDK abstracts the complexities of fog computing from developers and enables rapid development of edge-fog systems. Also, the FDK supports the utilization of virtualized devices to create a highly realistic emulation environment, allowing fog application prototypes to be built with zero additional costs and enabling portability to a physical infrastructure. We evaluate the resource allocation performance of the FDK using a testbed, including eight edge devices, four fog nodes, and five OpenFlow switches. Our evaluations show that the delay of resource allocation and deallocation is less than 279ms and 256ms for 95% of edge-fog transactions, respectively. Besides, we demonstrate that resource allocations are appropriately enforced and guaranteed, even amidst extreme network congestion.
Yin @cite_10 built a novel task-scheduling algorithm and designed a resource reallocation algorithm for fog networks, specifically for real-time, smart manufacturing applications. However, unlike the previous work, a management software component is not used in their approach, and each fog node is burdened with the task of deciding whether to accept, reject, or send requests to the cloud. Resource reallocation is periodically run on a single fog node, reallocating resources among tasks in order to meet delay constraints. Their results show reduced task delays and improved resource utilization of fog nodes. Their experiments are strictly simulation-based and the resource management scheme only includes a single fog node during decision making.
{ "cite_N": [ "@cite_10" ], "mid": [ "2810048489" ], "abstract": [ "Fog computing has been proposed as an extension of cloud computing to provide computation, storage, and network services in network edge. For smart manufacturing, fog computing can provide a wealth of computational and storage services, such as fault detection and state analysis of devices in assembly lines, if the middle layer between the industrial cloud and the terminal device is considered. However, limited resources and low-delay services hinder the application of new virtualization technologies in the task scheduling and resource management of fog computing. Thus, we build a new task-scheduling model by considering the role of containers. Then, we construct a task-scheduling algorithm to ensure that the tasks are completed on time and the number of concurrent tasks for the fog node is optimized. Finally, we propose a reallocation mechanism to reduce task delays in accordance with the characteristics of the containers. The results showed that our proposed task-scheduling algorithm and reallocation scheme can effectively reduce task delays and improve the concurrency number of the tasks in fog nodes." ] }
1907.03081
2954077899
With the rise of the Internet of Things (IoT), fog computing has emerged to help traditional cloud computing in meeting scalability demands. Fog computing makes it possible to fulfill real-time requirements of applications by bringing more processing, storage, and control power geographically closer to edge devices. However, since fog computing is a relatively new field, there is no standard platform for research and development in a realistic environment, and this dramatically inhibits innovation and development of applications suitable for the fog. In response to these challenges, we propose the FDK: A Fog Development Kit for software-defined edge-fog systems. By providing high-level interfaces for allocating computing and networking resources, the FDK abstracts the complexities of fog computing from developers and enables rapid development of edge-fog systems. Also, the FDK supports the utilization of virtualized devices to create a highly realistic emulation environment, allowing fog application prototypes to be built with zero additional costs and enabling portability to a physical infrastructure. We evaluate the resource allocation performance of the FDK using a testbed, including eight edge devices, four fog nodes, and five OpenFlow switches. Our evaluations show that the delay of resource allocation and deallocation is less than 279ms and 256ms for 95% of edge-fog transactions, respectively. Besides, we demonstrate that resource allocations are appropriately enforced and guaranteed, even amidst extreme network congestion.
Finally, the work that is perhaps most similar to the FDK is ENORM: the Edge Node Resource Management framework by Wang @cite_29 . Upon startup of the system, edge manager software installed on all edge nodes gathers and stores available system resources. Then, each edge node listens for resource requests from cloud manager software installed on a cloud server. Each resource request starts with a handshaking process that eventually leads to the initialization of a fog application. In contrast, edge nodes in our proposed FDK do not receive requests, but instead create and send them to an FDK instance running on a centralized controller. If accepted, the FDK then leverages containerization and SDN technologies to perform both fog node and network resource allocation, ensuring timely execution of services requested by edge devices.
{ "cite_N": [ "@cite_29" ], "mid": [ "2755775376" ], "abstract": [ "Current computing techniques using the cloud as a centralised server will become untenable as billions of devices get connected to the Internet. This raises the need for fog computing, which leverages computing at the edge of the network on nodes, such as routers, base stations and switches, along with the cloud. However, to realise fog computing the challenge of managing edge nodes will need to be addressed. This paper is motivated to address the resource management challenge. We develop the first framework to manage edge nodes, namely the Edge NOde Resource Management (ENORM) framework. Mechanisms for provisioning and auto-scaling edge node resources are proposed. The feasibility of the framework is demonstrated on a PokeMon Go-like online game use-case. The benefits of using ENORM are observed by reduced application latency between 20 -80 and reduced data transfer and communication frequency between the edge node and the cloud by up to 95 . These results highlight the potential of fog computing for improving the quality of service and experience." ] }
1907.02908
2913403708
Many deep reinforcement learning algorithms contain inductive biases that sculpt the agent's objective and its interface to the environment. These inductive biases can take many forms, including domain knowledge and pretuned hyper-parameters. In general, there is a trade-off between generality and performance when algorithms use such biases. Stronger biases can lead to faster learning, but weaker biases can potentially lead to more general algorithms. This trade-off is important because inductive biases are not free; substantial effort may be required to obtain relevant domain knowledge or to tune hyper-parameters effectively. In this paper, we re-examine several domain-specific components that bias the objective and the environmental interface of common deep reinforcement learning agents. We investigated whether the performance deteriorates when these components are replaced with adaptive solutions from the literature. In our experiments, performance sometimes decreased with the adaptive components, as one might expect when comparing to components crafted for the domain, but sometimes the adaptive components performed better. We investigated the main benefit of having fewer domain-specific components, by comparing the learning performance of the two systems on a different set of continuous control problems, without additional tuning of either system. As hypothesized, the system with adaptive components performed better on many of the new tasks.
The present work was partially inspired by the work of @cite_1 in the context of Go. They demonstrated that certain domain-specific heuristics (e.g., pretraining on human data, the use of handcrafted Go-specific features, and exploitation of certain symmetries in state space), while originally introduced to simplify learning, had actually outlived their usefulness: taking a tabula rasa approach, even stronger Go agents could be trained. Importantly, they showed that, by removing these domain heuristics, the same algorithm could master other games, such as Shogi and Chess. In our paper, we adopted a similar philosophy but investigated a very different set of domain-specific heuristics that are used in more traditional deep reinforcement learning agents.
{ "cite_N": [ "@cite_1" ], "mid": [ "2963403143" ], "abstract": [ "The Arcade Learning Environment (ALE) is an evaluation platform that poses the challenge of building AI agents with general competency across dozens of Atari 2600 games. It supports a variety of different problem settings and it has been receiving increasing attention from the scientific community, leading to some high-profile success stories such as the much publicized Deep Q-Networks (DQN). In this article we take a big picture look at how the ALE is being used by the research community. We show how diverse the evaluation methodologies in the ALE have become with time, and highlight some key concerns when evaluating agents in the ALE. We use this discussion to present some methodological best practices and provide new benchmark results using these best practices. To further the progress in the field, we introduce a new version of the ALE that supports multiple game modes and provides a form of stochasticity we call sticky actions. We conclude this big picture look by revisiting challenges posed when the ALE was introduced, summarizing the state-of-the-art in various problems and highlighting problems that remain open." ] }
1907.02908
2913403708
Many deep reinforcement learning algorithms contain inductive biases that sculpt the agent's objective and its interface to the environment. These inductive biases can take many forms, including domain knowledge and pretuned hyper-parameters. In general, there is a trade-off between generality and performance when algorithms use such biases. Stronger biases can lead to faster learning, but weaker biases can potentially lead to more general algorithms. This trade-off is important because inductive biases are not free; substantial effort may be required to obtain relevant domain knowledge or to tune hyper-parameters effectively. In this paper, we re-examine several domain-specific components that bias the objective and the environmental interface of common deep reinforcement learning agents. We investigated whether the performance deteriorates when these components are replaced with adaptive solutions from the literature. In our experiments, performance sometimes decreased with the adaptive components, as one might expect when comparing to components crafted for the domain, but sometimes the adaptive components performed better. We investigated the main benefit of having fewer domain-specific components, by comparing the learning performance of the two systems on a different set of continuous control problems, without additional tuning of either system. As hypothesized, the system with adaptive components performed better on many of the new tasks.
There are two other features of our algorithm that, despite not incorporating quite as much domain knowledge as the heuristics discussed in this paper, also constitute a potential impediment to its generality and scalability. 1) The use of parallel environments is not always feasible in practice, especially in real-world applications (although recent work on robot farms @cite_5 shows that it might still be a valid approach when sufficient resources are available). 2) The use of back-propagation through time for training recurrent state representations constrains the length of the temporal relationships that we can learn, since memory consumption is linear in the length of the rollouts. Further work on overcoming these limitations, successfully learning online from a single stream of experience, is a fruitful direction for future research.
{ "cite_N": [ "@cite_5" ], "mid": [ "2157864803" ], "abstract": [ "Many practitioners of reinforcement learning problems have observed that oftentimes the performance of the agent reaches very close to the optimal performance even though the estimated (action-)value function is still far from the optimal one. The goal of this paper is to explain and formalize this phenomenon by introducing the concept of the action-gap regularity. As a typical result, we prove that for an agent following the greedy policy ( ) with respect to an action-value function @math , the performance loss @math is upper bounded by @math , in which ζ ≥ = 0) is the parameter quantifying the action-gap regularity. For ζ > 0, our results indicate smaller performance loss compared to what previous analyses had suggested. Finally, we show how this regularity affects the performance of the family of approximate value iteration algorithms." ] }
1907.02874
2953849475
Sharing knowledge between tasks is vital for efficient learning in a multi-task setting. However, most research so far has focused on the easier case where knowledge transfer is not harmful, i.e., where knowledge from one task cannot negatively impact the performance on another task. In contrast, we present an approach to multi-task deep reinforcement learning based on attention that does not require any a-priori assumptions about the relationships between tasks. Our attention network automatically groups task knowledge into sub-networks on a state level granularity. It thereby achieves positive knowledge transfer if possible, and avoids negative transfer in cases where tasks interfere. We test our algorithm against two state-of-the-art multi-task transfer learning approaches and show comparable or superior performance while requiring fewer network parameters.
Glatt et al. @cite_10 train a DQN on a source task and investigate how the learned weights, when used as initialization for a target task, alter the performance. In a similar manner, @cite_30 @cite_13 @cite_15 show that some transfer is possible by simply training one network on multiple tasks. However, since these algorithms do not incorporate any task-specific weights, the best that can be done is to interpolate between conflicting tasks. In contrast, our method allows conflicting tasks to be learned in separate networks.
{ "cite_N": [ "@cite_30", "@cite_15", "@cite_10", "@cite_13" ], "mid": [ "", "2891076394", "2585821313", "2786036274" ], "abstract": [ "", "The reinforcement learning (RL) community has made great strides in designing algorithms capable of exceeding human performance on specific tasks. These algorithms are mostly trained one task at the time, each new task requiring to train a brand new agent instance. This means the learning algorithm is general, but each solution is not; each agent can only solve the one task it was trained on. In this work, we study the problem of learning to master not one but multiple sequentialdecision tasks at once. A general issue in multi-task learning is that a balance must be found between the needs of multiple tasks competing for the limited resources of a single learning system. Many learning algorithms can get distracted by certain tasks in the set of tasks to solve. Such tasks appear more salient to the learning process, for instance because of the density or magnitude of the in-task rewards. This causes the algorithm to focus on those salient tasks at the expense of generality. We propose to automatically adapt the contribution of each task to the agent’s updates, so that all tasks have a similar impact on the learning dynamics. This resulted in state of the art performance on learning to play all games in a set of 57 diverse Atari games. Excitingly, our method learned a single trained policy - with a single set of weights - that exceeds median human performance. To our knowledge, this was the first time a single agent surpassed human-level performance on this multi-task domain. The same approach also demonstrated state of the art performance on a set of 30 tasks in the 3D reinforcement learning platform DeepMind Lab.", "Driven by recent developments in the area of Artificial Intelligence research, a promising new technology for building intelligent agents has evolved. The technology is termed Deep Reinforcement Learning (DRL) and combines the classic field of Reinforcement Learning (RL) with the representational power of modern Deep Learning approaches. DRL enables solutions for difficult and high dimensional tasks, such as Atari game playing, for which previously proposed RL methods were inadequate. However, these new solution approaches still take a long time to learn how to actuate in such domains and so far are mainly researched for single task scenarios. The ability to generalize gathered knowledge and transfer it to another task has been researched for classical RL, but remains an open problem for the DRL domain. Consequently, in this article we evaluate under which conditions the application of Transfer Learning (TL) to the DRL domain improves the learning of a new task. Our results indicate that TL can greatly accelerate DRL when transferring knowledge from similar tasks, and that the similarity between tasks plays a key role in the success or failure of knowledge transfer.", "In this work we aim to solve a large collection of tasks using a single reinforcement learning agent with a single set of parameters. A key challenge is to handle the increased amount of data and extended training time, which is already a problem in single task learning. We have developed a new distributed agent IMPALA (Importance-Weighted Actor Learner Architecture) that can scale to thousands of machines and achieve a throughput rate of 250,000 frames per second. 
We achieve stable learning at high throughput by combining decoupled acting and learning with a novel off-policy correction method called V-trace, which was critical for achieving learning stability. We demonstrate the effectiveness of IMPALA for multi-task reinforcement learning on DMLab-30 (a set of 30 tasks from the DeepMind Lab environment (Beattie et al., 2016)) and Atari-57 (all available Atari games in the Arcade Learning Environment (Bellemare et al., 2013a)). Our results show that IMPALA is able to achieve better performance than previous agents, use less data and crucially exhibits positive transfer between tasks as a result of its multi-task approach." ] }
1907.02874
2953849475
Sharing knowledge between tasks is vital for efficient learning in a multi-task setting. However, most research so far has focused on the easier case where knowledge transfer is not harmful, i.e., where knowledge from one task cannot negatively impact the performance on another task. In contrast, we present an approach to multi-task deep reinforcement learning based on attention that does not require any a-priori assumptions about the relationships between tasks. Our attention network automatically groups task knowledge into sub-networks on a state level granularity. It thereby achieves positive knowledge transfer if possible, and avoids negative transfer in cases where tasks interfere. We test our algorithm against two state-of-the-art multi-task transfer learning approaches and show comparable or superior performance while requiring fewer network parameters.
One interesting line of research @cite_8 @cite_7 @cite_16 @cite_11 @cite_9 capitalizes on transferring knowledge based on successor features, i.e., shared environment dynamics. In contrast, our method does not rely on shared environment dynamics nor action alignment across tasks.
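For readers unfamiliar with the term, successor features (in the notation of the cited papers) decouple environment dynamics from rewards: assuming rewards are linear in features \phi with weights w,

\psi^{\pi}(s,a) = \mathbb{E}^{\pi}\Big[ \sum_{i=t}^{\infty} \gamma^{\,i-t}\, \phi(s_i, a_i, s_{i+1}) \;\Big|\; S_t = s,\ A_t = a \Big],
\qquad r(s,a,s') = \phi(s,a,s')^{\top} w \;\Rightarrow\; Q^{\pi}(s,a) = \psi^{\pi}(s,a)^{\top} w .

Transfer then amounts to re-estimating w for a new reward function while reusing \psi^{\pi}, which is why this family of methods presupposes shared dynamics, the assumption the approach described above avoids.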
{ "cite_N": [ "@cite_7", "@cite_8", "@cite_9", "@cite_16", "@cite_11" ], "mid": [ "2963019567", "2605369401", "2914694967", "2962717849", "2951871955" ], "abstract": [ "In this paper we consider the problem of robot navigation in simple maze-like environments where the robot has to rely on its onboard sensors to perform the navigation task. In particular, we are interested in solutions to this problem that do not require localization, mapping or planning. Additionally, we require that our solution can quickly adapt to new situations (e.g., changing navigation goals and environments). To meet these criteria we frame this problem as a sequence of related reinforcement learning tasks. We propose a successor-feature-based deep reinforcement learning algorithm that can learn to transfer knowledge from previously mastered navigation tasks to new problem instances. Our algorithm substantially decreases the required learning time after the first task instance has been solved, which makes it easily adaptable to changing environments. We validate our method in both simulated and real robot experiments with a Robotino and compare it to a set of baseline methods including classical planning-based navigation.", "This article addresses a particular Transfer Reinforcement Learning (RL) problem: when dynamics do not change from one task to another, and only the reward function does. Our method relies on two ideas, the first one is that transition samples obtained from a task can be reused to learn on any other task: an immediate reward estimator is learnt in a supervised fashion and for each sample, the reward entry is changed by its reward estimate. The second idea consists in adopting the optimism in the face of uncertainty principle and to use upper bound reward estimates. Our method is tested on a navigation task, under four Transfer RL experimental settings: with a known reward function, with strong and weak expert knowledge on the reward function, and with a completely unknown reward function. It is also evaluated in a Multi-Task RL experiment and compared with the state-of-the-art algorithms. Results reveal that this method constitutes a major improvement for transfer multi-task problems that share dynamics.", "One key challenge in reinforcement learning is the ability to generalize knowledge in control problems. While deep learning methods have been successfully combined with model-free reinforcement-learning algorithms, how to perform model-based reinforcement learning in the presence of approximation errors still remains an open problem. Using successor features, a feature representation that predicts a temporal constraint, this paper presents three contributions: First, it shows how learning successor features is equivalent to model-free learning. Then, it shows how successor features encode model reductions that compress the state space by creating state partitions of bisimilar states. Using this representation, an intelligent agent is guaranteed to accurately predict future reward outcomes, a key property of model-based reinforcement-learning algorithms. Lastly, it presents a loss objective and prediction error bounds showing that accurately predicting value functions and reward sequences is possible with an approximation of successor features. On finite control problems, we illustrate how minimizing this loss objective results in approximate bisimulations. 
The results presented in this paper provide a novel understanding of representations that can support model-free and model-based reinforcement learning.", "Transfer in reinforcement learning refers to the notion that generalization should occur not only within a task but also across tasks. We propose a transfer framework for the scenario where the reward function changes between tasks but the environment's dynamics remain the same. Our approach rests on two key ideas: \"successor features\", a value function representation that decouples the dynamics of the environment from the rewards, and \"generalized policy improvement\", a generalization of dynamic programming's policy improvement operation that considers a set of policies rather than a single one. Put together, the two ideas lead to an approach that integrates seamlessly within the reinforcement learning framework and allows the free exchange of information across tasks. The proposed method also provides performance guarantees for the transferred policy even before any learning has taken place. We derive two theorems that set our approach in firm theoretical ground and present experiments that show that it successfully promotes transfer in practice, significantly outperforming alternative methods in a sequence of navigation tasks and in the control of a simulated robotic arm.", "The ability to transfer skills across tasks has the potential to scale up reinforcement learning (RL) agents to environments currently out of reach. Recently, a framework based on two ideas, successor features (SFs) and generalised policy improvement (GPI), has been introduced as a principled way of transferring skills. In this paper we extend the SFs & GPI framework in two ways. One of the basic assumptions underlying the original formulation of SFs & GPI is that rewards for all tasks of interest can be computed as linear combinations of a fixed set of features. We relax this constraint and show that the theoretical guarantees supporting the framework can be extended to any set of tasks that only differ in the reward function. Our second contribution is to show that one can use the reward functions themselves as features for future tasks, without any loss of expressiveness, thus removing the need to specify a set of features beforehand. This makes it possible to combine SFs & GPI with deep learning in a more stable way. We empirically verify this claim on a complex 3D environment where observations are images from a first-person perspective. We show that the transfer promoted by SFs & GPI leads to very good policies on unseen tasks almost instantaneously. We also describe how to learn policies specialised to the new tasks in a way that allows them to be added to the agent's set of skills, and thus be reused in the future." ] }
1907.02874
2953849475
Sharing knowledge between tasks is vital for efficient learning in a multi-task setting. However, most research so far has focused on the easier case where knowledge transfer is not harmful, i.e., where knowledge from one task cannot negatively impact the performance on another task. In contrast, we present an approach to multi-task deep reinforcement learning based on attention that does not require any a-priori assumptions about the relationships between tasks. Our attention network automatically groups task knowledge into sub-networks on a state level granularity. It thereby achieves positive knowledge transfer if possible, and avoids negative transfer in cases where tasks interfere. We test our algorithm against two state-of-the-art multi-task transfer learning approaches and show comparable or superior performance while requiring fewer network parameters.
Czarnecki et al. @cite_19 use multiple networks, similar to our approach. However, their focus is on automated curriculum learning. Therefore, they adjust the policy mixing weights through population-based training @cite_23 , while we learn attention weights conditioned on the task state.
{ "cite_N": [ "@cite_19", "@cite_23" ], "mid": [ "2803740478", "2770298516" ], "abstract": [ "We introduce MixM using our method to progress through an action-space curriculum we achieve both faster training and better final performance than one obtains using traditional methods. (2) We further show that M&M can be used successfully to progress through a curriculum of architectural variants defining an agents internal state. (3) Finally, we illustrate how a variant of our method can be used to improve agent performance in a multitask setting.", "Neural networks dominate the modern machine learning landscape, but their training and success still suffer from sensitivity to empirical choices of hyperparameters such as model architecture, loss function, and optimisation algorithm. In this work we present , a simple asynchronous optimisation algorithm which effectively utilises a fixed computational budget to jointly optimise a population of models and their hyperparameters to maximise performance. Importantly, PBT discovers a schedule of hyperparameter settings rather than following the generally sub-optimal strategy of trying to find a single fixed set to use for the whole course of training. With just a small modification to a typical distributed hyperparameter training framework, our method allows robust and reliable training of models. We demonstrate the effectiveness of PBT on deep reinforcement learning problems, showing faster wall-clock convergence and higher final performance of agents by optimising over a suite of hyperparameters. In addition, we show the same method can be applied to supervised learning for machine translation, where PBT is used to maximise the BLEU score directly, and also to training of Generative Adversarial Networks to maximise the Inception score of generated images. In all cases PBT results in the automatic discovery of hyperparameter schedules and model selection which results in stable training and better final performance." ] }
1907.02874
2953849475
Sharing knowledge between tasks is vital for efficient learning in a multi-task setting. However, most research so far has focused on the easier case where knowledge transfer is not harmful, i.e., where knowledge from one task cannot negatively impact the performance on another task. In contrast, we present an approach to multi-task deep reinforcement learning based on attention that does not require any a-priori assumptions about the relationships between tasks. Our attention network automatically groups task knowledge into sub-networks on a state level granularity. It thereby achieves positive knowledge transfer if possible, and avoids negative transfer in cases where tasks interfere. We test our algorithm against two state-of-the-art multi-task transfer learning approaches and show comparable or superior performance while requiring fewer network parameters.
Rusu et al. @cite_17 introduce Progressive Neural Networks (PNN), an effective approach for learning in a sequential multi-task setting. In PNN, a new network and lateral connections are added for each additional task in order to enable knowledge transfer, which speeds up the training of subsequent tasks. The additional network parts let the architecture grow super-linearly, while our network scales economically with an increasing number of tasks. Another strong approach is introduced by Teh et al. @cite_1 . Their algorithm, Distral, learns multiple tasks at once by sharing knowledge through a distillation process of an additional shared policy network. In contrast to our approach, this requires an aligned action space and a separate network for each task. We compare against Distral and PNN in our experiments.
{ "cite_N": [ "@cite_1", "@cite_17" ], "mid": [ "2963199420", "2426267443" ], "abstract": [ "Most deep reinforcement learning algorithms are data inefficient in complex and rich environments, limiting their applicability to many scenarios. One direction for improving data efficiency is multitask learning with shared neural network parameters, where efficiency may be improved through transfer across related tasks. In practice, however, this is not usually observed, because gradients from different tasks can interfere negatively, making learning unstable and sometimes even less data efficient. Another issue is the different reward schemes between tasks, which can easily lead to one task dominating the learning of a shared model. We propose a new approach for joint training of multiple tasks, which we refer to as Distral (distill & transfer learning). Instead of sharing parameters between the different workers, we propose to share a \"distilled\" policy that captures common behaviour across tasks. Each worker is trained to solve its own task while constrained to stay close to the shared policy, while the shared policy is trained by distillation to be the centroid of all task policies. Both aspects of the learning process are derived by optimizing a joint objective function. We show that our approach supports efficient transfer on complex 3D environments, outperforming several related methods. Moreover, the proposed learning process is more robust to hyperparameter settings and more stable—attributes that are critical in deep reinforcement learning.", "Methods and systems for performing a sequence of machine learning tasks. One system includes a sequence of deep neural networks (DNNs), including: a first DNN corresponding to a first machine learning task, wherein the first DNN comprises a first plurality of indexed layers, and each layer in the first plurality of indexed layers is configured to receive a respective layer input and process the layer input to generate a respective layer output; and one or more subsequent DNNs corresponding to one or more respective machine learning tasks, wherein each subsequent DNN comprises a respective plurality of indexed layers, and each layer in a respective plurality of indexed layers with index greater than one receives input from a preceding layer of the respective subsequent DNN, and one or more preceding layers of respective preceding DNNs, wherein a preceding layer is a layer whose index is one less than the current index." ] }
1907.03143
2954554213
Knowledge graphs (KGs) typically contain temporal facts indicating relationships among entities at different times. Due to their incompleteness, several approaches have been proposed to infer new facts for a KG based on the existing ones, a problem known as KG completion. KG embedding approaches have proved effective for KG completion; however, they have been developed mostly for static KGs. Developing temporal KG embedding models is an increasingly important problem. In this paper, we build novel models for temporal KG completion through equipping static models with a diachronic entity embedding function which provides the characteristics of entities at any point in time. This is in contrast to the existing temporal KG embedding approaches where only static entity features are provided. The proposed embedding function is model-agnostic and can be potentially combined with any static model. We prove that combining it with SimplE, a recent model for static KG embedding, results in a fully expressive model for temporal KG completion. Our experiments indicate the superiority of our proposal compared to existing baselines.
Statistical relational AI (StaRAI) @cite_9 @cite_13 approaches are mainly based on soft (hand-crafted or learned) rules @cite_1 @cite_26 @cite_57 @cite_62 , where the probability of a world is typically proportional to the number of rules that are satisfied or violated in that world and the confidence of each rule. A line of work in this area combines a stack of soft rules with embeddings for property prediction @cite_30 @cite_2 . Another line of work extends soft rules to temporal KGs @cite_32 @cite_45 @cite_33 @cite_37 @cite_19 @cite_14 . Approaches based on soft rules have generally been shown to underperform KG embedding models @cite_7 .
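Concretely, in the Markov logic formulation cited above, the probability of a world x is log-linear in the weighted counts of satisfied rule groundings:

P(X = x) = \frac{1}{Z} \exp\Big( \sum_{i} w_i \, n_i(x) \Big),

where n_i(x) is the number of true groundings of formula i in world x, w_i is the formula's weight (encoding the rule's confidence), and Z normalizes over all possible worlds.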
{ "cite_N": [ "@cite_30", "@cite_37", "@cite_26", "@cite_62", "@cite_33", "@cite_14", "@cite_7", "@cite_9", "@cite_1", "@cite_32", "@cite_57", "@cite_19", "@cite_45", "@cite_2", "@cite_13" ], "mid": [ "1826836734", "2606488140", "1824971879", "2185448989", "2133973199", "2780770411", "1529533208", "2320648065", "1977970897", "2105042294", "92973652", "2604713576", "2119421136", "2963109792", "2598224856" ], "abstract": [ "We propose a method combining relational-logic representations with neural network learning. A general lifted architecture, possibly reflecting some background domain knowledge, is described through relational rules which may be handcrafted or learned. The relational rule-set serves as a template for unfolding possibly deep neural networks whose structures also reflect the structures of given training or testing relational examples. Different networks corresponding to different examples share their weights, which co-evolve during training by stochastic gradient descent algorithm. The framework allows for hierarchical relational modeling constructs and learning of latent relational concepts through shared hidden layers weights corresponding to the rules. Discovery of notable relational concepts and experiments on 78 relational learning benchmarks demonstrate favorable performance of the method.", "A probabilistic temporal knowledge base contains facts that are annotated with a time interval and a confidence score. The interval defines the time span for which it can be assumed that the fact is true with a probability that is expressed by the confidence score. Given a probabilistic temporal knowledge base, we propose the use of Markov Logic in combination with Allen’s interval calculus to select the most probable consistent subset of facts by computing the MAP state. We apply our approach on a specific domain of DBpedia, namely the domain of academics. We simulate a scenario of extending a knowledge base automatically in an open setting by adding erroneous facts to the facts stated in DBpedia. Our results in- dicate that we can eliminate a large fraction of these errors without removing too many correctly stated facts.", "We introduce ProbLog, a probabilistic extension of Prolog. A ProbLog program defines a distribution over logic programs by specifying for each clause the probability that it belongs to a randomly sampled program, and these probabilities are mutually independent. The semantics of ProbLog is then defined by the success probability of a query, which corresponds to the probability that the query succeeds in a randomly sampled program. The key contribution of this paper is the introduction of an effective solver for computing success probabilities. It essentially combines SLD-resolution with methods for computing the probability of Boolean formulae. Our implementation further employs an approximation algorithm that combines iterative deepening with binary decision diagrams. We report on experiments in the context of discovering links in real biological networks, a demonstration of the practical usefulness of the approach.", "Logistic regression is a commonly used representation for aggregators in Bayesian belief networks when a child has multiple parents. In this paper we consider extending logistic regression to relational models, where we want to model varying populations and interactions among parents. In this paper, we first examine the representational problems caused by population variation. 
We show how these problems arise even in simple cases with a single parametrized parent, and propose a linear relational logistic regression which we show can represent arbitrary linear (in population size) decision thresholds, whereas the traditional logistic regression cannot. Then we examine representing interactions among the parents of a child node, and representing non-linear dependency on population size. We propose a multi-parent relational logistic regression which can represent interactions among parents and arbitrary polynomial decision thresholds. Finally, we show how other well-known aggregators can be represented using this relational logistic regression.", "Temporal annotations of facts are a key component both for building a high-accuracy knowledge base and for answering queries over the resulting temporal knowledge base with high precision and recall. In this paper, we present a temporal-probabilistic database model for cleaning uncertain temporal facts obtained from information extraction methods. Specifically, we consider a combination of temporal deduction rules, temporal consistency constraints and probabilistic inference based on the common possible-worlds semantics with data lineage, and we study the theoretical properties of this data model. We further develop a query engine which is capable of scaling to very large temporal knowledge bases, with nearly interactive query response times over millions of uncertain facts and hundreds of thousands of grounded rules. Our experiments over two real-world datasets demonstrate the increased robustness of our approach compared to related techniques based on constraint solving via Integer Linear Programming (ILP) and probabilistic inference via Markov Logic Networks (MLNs). We are also able to show that our runtime performance is more than competitive to current ILP solvers and the fastest available, probabilistic but non-temporal, database engines.", "Time-wise knowledge is relevant in knowledge graphs as the majority facts are true in some time period, for instance, (Barack Obama, president of, USA, 2009, 2017). Consequently, temporal information extraction and temporal scoping of facts in knowledge graphs have been a focus of recent research. Due to this, a number of temporal knowledge graphs have become available such as YAGO and Wikidata. In addition, since the temporal facts are obtained from open text, they can be weighted, i.e., the extraction tools assign each fact with a confidence score indicating how likely that fact is to be true. Temporal facts coupled with confidence scores result in a probabilistic temporal knowledge graph. In such a graph, probabilistic query evaluation (marginal inference) and computing most probable explanations (MPE inference) are fundamental problems. In addition, in these problems temporal coalescing, an important research in temporal databases, is very challenging. In this work, we study these problems by using probabilistic programming. We report experimental results comparing the efficiency of several state of the art systems.", "Relational machine learning studies methods for the statistical analysis of relational, or graph-structured, data. In this paper, we provide a review of how such statistical models can be “trained” on large knowledge graphs, and then used to predict new facts about the world (which is equivalent to predicting new edges in the graph). 
In particular, we discuss two fundamentally different kinds of statistical relational models, both of which can scale to massive data sets. The first is based on latent feature models such as tensor factorization and multiway neural networks. The second is based on mining observable patterns in the graph. We also show how to combine these latent and observable models to get improved modeling power at decreased computational cost. Finally, we discuss how such statistical models of graphs can be combined with text-based information extraction methods for automatically constructing knowledge graphs from the Web. To this end, we also discuss Google's knowledge vault project as an example of such combination.", "An intelligent agent interacting with the real world will encounter individual people, courses, test results, drugs prescriptions, chairs, boxes, etc., and needs to reason about properties of these individuals and relations among them as well as cope with uncertainty. Uncertainty has been studied in probability theory and graphical models, and relations have been studied in logic, in particular in the predicate calculus and its extensions. This book examines the foundations of combining logic and probability into what are called relational probabilistic models. It introduces representations, inference, and learning techniques for probability, logic, and their combinations. The book focuses on two representations in detail: Markov logic networks, a relational extension of undirected graphical models and weighted first-order predicate calculus formula, and Problog, a probabilistic extension of logic programs that can also be viewed as a Turing-complete relational extension of Bayesian networks.", "We propose a simple approach to combining first-order logic and probabilistic graphical models in a single representation. A Markov logic network (MLN) is a first-order knowledge base with a weight attached to each formula (or clause). Together with a set of constants representing objects in the domain, it specifies a ground Markov network containing one feature for each possible grounding of a first-order formula in the KB, with the corresponding weight. Inference in MLNs is performed by MCMC over the minimal subset of the ground network required for answering the query. Weights are efficiently learned from relational databases by iteratively optimizing a pseudo-likelihood measure. Optionally, additional clauses are learned using inductive logic programming techniques. Experiments with a real-world database and knowledge base in a university domain illustrate the promise of this approach.", "Recent research has shown that surprisingly rich models of human behavior can be learned from GPS (positional) data. However, most research to date has concentrated on modeling single individuals or aggregate statistical properties of groups of people. Given noisy real-world GPS data, we—in contrast—consider the problem of modeling and recognizing activities that involve multiple related individuals playing a variety of roles. Our test domain is the game of capture the flag—an outdoor game that involves many distinct cooperative and competitive joint activities. We model the domain using Markov logic, a statistical relational language, and learn a theory that jointly denoises the data and infers occurrences of high-level activities, such as capturing a player. 
Our model combines constraints imposed by the geometry of the game area, the motion model of the players, and by the rules and dynamics of the game in a probabilistically and logically sound fashion. We show that while it may be impossible to directly detect a multi-agent activity due to sensor noise or malfunction, the occurrence of the activity can still be inferred by considering both its impact on the future behaviors of the people involved as well as the events that could have preceded it. We compare our unified approach with three alternatives (both probabilistic and nonprobabilistic) where either the denoising of the GPS data and the detection of the high-level activities are strictly separated, or the states of the players are not considered, or both. We show that the unified approach with the time window spanning the entire game, although more computationally costly, is significantly more accurate.", "Probabilistic soft logic (PSL) is a framework for collective, probabilistic reasoning in relational domains. PSL uses first order logic rules as a template language for graphical models over random variables with soft truth values from the interval [0, 1]. Inference in this setting is a continuous optimization task, which can be solved efficiently. This paper provides an overview of the PSL language and its techniques for inference and weight learning. An implementation of PSL is available at http://psl.umiacs.umd.edu .", "The management of uncertainty is crucial when harvesting structured content from unstructured and noisy sources. Knowledge Graphs ( KGs ) are a prominent example. KGs maintain both numerical and non-numerical facts, with the support of an underlying schema. These facts are usually accompanied by a confidence score that witnesses how likely is for them to hold. Despite their popularity, most of existing KGs focus on static data thus impeding the availability of timewise knowledge. What is missing is a comprehensive solution for the management of uncertain and temporal data in KGs . The goal of this paper is to fill this gap. We rely on two main ingredients. The first is a numerical extension of Markov Logic Networks (MLNs) that provide the necessary underpinning to formalize the syntax and semantics of uncertain temporal KGs . The second is a set of Datalog constraints with inequalities that extend the underlying schema of the KGs and help to detect inconsistencies. From a theoretical point of view, we discuss the complexity of two important classes of queries for uncertain temporal KGs: maximum-a-posteriori and conditional probability inference. Due to the hardness of these problems and the fact that MLN solvers do not scale well, we also explore the usage of Probabilistic Soft Logics (PSL) as a practical tool to support our reasoning tasks. We report on an experimental evaluation comparing the MLN and PSL approaches.", "Markov logic is a widely used tool in statistical relational learning, which uses a weighted first-order logic knowledge base to specify a Markov random field (MRF) or a conditional random field (CRF). In many applications, a Markov logic network (MLN) is trained in one domain, but used in a different one. This paper focuses on dynamic Markov logic networks, where the size of the discretized time-domain typically varies between training and testing. It has been previously pointed out that the marginal probabilities of truth assignments to ground atoms can change if one extends or reduces the domains of predicates in an MLN.
We show that in addition to this problem, the standard way of unrolling a Markov logic theory into a MRF may result in time-inhomogeneity of the underlying Markov chain. Furthermore, even if these representational problems are not significant for a given domain, we show that the more practical problem of generating samples in a sequential conditional random field for the next slice relying on the samples from the previous slice has high computational cost in the general case, due to the need to estimate a normalization factor for each sample. We propose a new discriminative model, slice normalized dynamic Markov logic networks (SN-DMLN), that suffers from none of these issues. It supports efficient online inference, and can directly model influences between variables within a time slice that do not have a causal direction, in contrast with fully directed models (e.g., DBNs). Experimental results show an improvement in accuracy over previous approaches to online inference in dynamic Markov logic networks.", "", "" ] }
1907.03143
2954554213
Knowledge graphs (KGs) typically contain temporal facts indicating relationships among entities at different times. Due to their incompleteness, several approaches have been proposed to infer new facts for a KG based on the existing ones, a problem known as KG completion. KG embedding approaches have proved effective for KG completion; however, they have been developed mostly for static KGs. Developing temporal KG embedding models is an increasingly important problem. In this paper, we build novel models for temporal KG completion through equipping static models with a diachronic entity embedding function which provides the characteristics of entities at any point in time. This is in contrast to the existing temporal KG embedding approaches where only static entity features are provided. The proposed embedding function is model-agnostic and can be potentially combined with any static model. We prove that combining it with SimplE, a recent model for static KG embedding, results in a fully expressive model for temporal KG completion. Our experiments indicate the superiority of our proposal compared to existing baselines.
A further class of approaches defines weighted template walks (random walks) on a KG and then answers queries by template matching @cite_5 @cite_4 . These walk-based approaches have been shown to be quite similar to, and in some cases subsumed by, the models based on soft rules @cite_42 .
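To make the template-walk scoring concrete, the path-ranking formulation described in the cited abstracts combines per-path random-walk probabilities with learned path weights; the notation below is our own sketch of that description:

```latex
% Path-ranking-style score for relation r between source s and
% target t: \Pi_r is a set of edge-label sequences (path experts),
% P(t | s; \pi) is the probability that a random walk from s
% following \pi reaches t, and \theta_\pi are learned weights.
\mathrm{score}(s, r, t) = \sum_{\pi \in \Pi_r} \theta_\pi \, P(t \mid s;\, \pi)
```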
{ "cite_N": [ "@cite_5", "@cite_42", "@cite_4" ], "mid": [ "2029249040", "2789351899", "1756422141" ], "abstract": [ "Scientific literature with rich metadata can be represented as a labeled directed graph. This graph representation enables a number of scientific tasks such as ad hoc retrieval or named entity recognition (NER) to be formulated as typed proximity queries in the graph. One popular proximity measure is called Random Walk with Restart (RWR), and much work has been done on the supervised learning of RWR measures by associating each edge label with a parameter. In this paper, we describe a novel learnable proximity measure which instead uses one weight per edge label sequence: proximity is defined by a weighted combination of simple \"path experts\", each corresponding to following a particular sequence of labeled edges. Experiments on eight tasks in two subdomains of biology show that the new learning method significantly outperforms the RWR model (both trained and untrained). We also extend the method to support two additional types of experts to model intrinsic properties of entities: query-independent experts, which generalize the PageRank measure, and popular entity experts which allow rankings to be adjusted for particular entities that are especially important.", "The aim of statistical relational learning is to learn statistical models from relational or graph-structured data. Three main statistical relational learning paradigms include weighted rule learning, random walks on graphs, and tensor factorization. These paradigms have been mostly developed and studied in isolation for many years, with few works attempting at understanding the relationship among them or combining them. In this paper, we study the relationship between the path ranking algorithm (PRA), one of the most well-known relational learning methods in the graph random walk paradigm, and relational logistic regression (RLR), one of the recent developments in weighted rule learning. We provide a simple way to normalize relations and prove that relational logistic regression using normalized relations generalizes the path ranking algorithm. This result provides a better understanding of relational learning, especially for the weighted rule learning and graph random walk paradigms. It opens up the possibility of using the more flexible RLR rules within PRA models and even generalizing both by including normalized and unnormalized relations in the same model.", "We consider the problem of performing learning and inference in a large scale knowledge base containing imperfect knowledge with incomplete coverage. We show that a soft inference procedure based on a combination of constrained, weighted, random walks through the knowledge base graph can be used to reliably infer new beliefs for the knowledge base. More specifically, we show that the system can learn to infer different target relations by tuning the weights associated with random walks that follow different paths through the graph, using a version of the Path Ranking Algorithm (Lao and Cohen, 2010b). We apply this approach to a knowledge base of approximately 500,000 beliefs extracted imperfectly from the web by NELL, a never-ending language learner (, 2010). This new system improves significantly over NELL's earlier Horn-clause learning and inference method: it obtains nearly double the precision at rank 100, and the new learning method is also applicable to many more inference tasks." ] }
1907.03143
2954554213
Knowledge graphs (KGs) typically contain temporal facts indicating relationships among entities at different times. Due to their incompleteness, several approaches have been proposed to infer new facts for a KG based on the existing ones, a problem known as KG completion. KG embedding approaches have proved effective for KG completion; however, they have been developed mostly for static KGs. Developing temporal KG embedding models is an increasingly important problem. In this paper, we build novel models for temporal KG completion through equipping static models with a diachronic entity embedding function which provides the characteristics of entities at any point in time. This is in contrast to the existing temporal KG embedding approaches where only static entity features are provided. The proposed embedding function is model-agnostic and can be potentially combined with any static model. We prove that combining it with SimplE, a recent model for static KG embedding, results in a fully expressive model for temporal KG completion. Our experiments indicate the superiority of our proposal compared to existing baselines.
A large number of models have been developed for static KG embedding. One class consists of the translational approaches, corresponding to variations of TransE (see, e.g., @cite_49 @cite_15 @cite_3 ). Another class is based on a bilinear score function @math , with each model imposing a different sparsity constraint on the @math matrices (see, e.g., @cite_34 @cite_6 @cite_48 @cite_58 @cite_40 ). A third class relies on deep learning, applying feed-forward or convolutional layers on top of the embeddings (see, e.g., @cite_23 @cite_28 @cite_63 @cite_17 ). These models can potentially be extended to TKGC through our diachronic embedding.
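As a minimal sketch of the bilinear family (toy sizes and variable names are our own assumptions), the snippet below scores a triple as e_h^T W_r e_t with an unconstrained relation matrix, and then with W_r restricted to a diagonal, the sparsity constraint that yields a DistMult-style special case; other constraints on the relation matrices (e.g., complex-valued or tied parameters) give the remaining cited variants.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_entities, n_relations = 8, 5, 3

E = rng.normal(size=(n_entities, dim))        # entity embeddings
W = rng.normal(size=(n_relations, dim, dim))  # full bilinear matrices
w_diag = rng.normal(size=(n_relations, dim))  # diagonal-only variant

def score_full(h, r, t):
    """General bilinear score e_h^T W_r e_t with an unconstrained W_r."""
    return E[h] @ W[r] @ E[t]

def score_diag(h, r, t):
    """Same score with W_r constrained to be diagonal (DistMult-style)."""
    return np.sum(E[h] * w_diag[r] * E[t])

print(score_full(0, 1, 2), score_diag(0, 1, 2))
```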
{ "cite_N": [ "@cite_28", "@cite_48", "@cite_58", "@cite_3", "@cite_6", "@cite_40", "@cite_49", "@cite_23", "@cite_63", "@cite_15", "@cite_34", "@cite_17" ], "mid": [ "2016753842", "2145544171", "2962850650", "2463781041", "2963432357", "2964140943", "2184957013", "2127426251", "2964116313", "2283196293", "205829674", "2888572441" ], "abstract": [ "Recent years have witnessed a proliferation of large-scale knowledge bases, including Wikipedia, Freebase, YAGO, Microsoft's Satori, and Google's Knowledge Graph. To increase the scale even further, we need to explore automatic methods for constructing knowledge bases. Previous approaches have primarily focused on text-based extraction, which can be very noisy. Here we introduce Knowledge Vault, a Web-scale probabilistic knowledge base that combines extractions from Web content (obtained via analysis of text, tabular data, page structure, and human annotations) with prior knowledge derived from existing knowledge repositories. We employ supervised machine learning methods for fusing these distinct information sources. The Knowledge Vault is substantially bigger than any previously published structured knowledge repository, and features a probabilistic inference system that computes calibrated probabilities of fact correctness. We report the results of multiple studies that explore the relative utility of the different information sources and extraction methods.", "Learning embeddings of entities and relations is an efficient and versatile method to perform machine learning on relational data such as knowledge graphs. In this work, we propose holographic embeddings (HOLE) to learn compositional vector space representations of entire knowledge graphs. The proposed method is related to holographic models of associative memory in that it employs circular correlation to create compositional representations. By using correlation as the compositional operator, HOLE can capture rich interactions but simultaneously remains efficient to compute, easy to train, and scalable to very large datasets. Experimentally, we show that holographic embeddings are able to outperform state-of-the-art methods for link prediction on knowledge graphs and relational learning benchmark datasets.", "Knowledge graphs contain knowledge about the world and provide a structured representation of this knowledge. Current knowledge graphs contain only a small subset of what is true in the world. Link prediction approaches aim at predicting new links for a knowledge graph given the existing links between the entities. Tensor factorization approaches have proved promising for such link prediction problems. Proposed in 1927, Canonical Polyadic (CP) decomposition is among the first tensor factorization approaches. CP generally performs poorly for link prediction as it learns two independent embedding vectors for each entity, whereas they are really tied. We present a simple enhancement (which we call SimplE) of CP to allow the two embeddings of each entity to be learned dependently. The complexity of SimplE grows linearly with the size of embeddings. The embeddings learned through SimplE are interpretable, and certain types of background knowledge in terms of logical rules can be incorporated into these embeddings through weight tying. We prove SimplE is fully-expressive and derive a bound on the size of its embeddings for full expressivity. 
We show empirically that, despite its simplicity, SimplE outperforms several state-of-the-art tensor factorization techniques.", "Knowledge bases of real-world facts about entities and their relationships are useful resources for a variety of natural language processing tasks. However, because knowledge bases are typically incomplete, it is useful to be able to perform link prediction, i.e., predict whether a relationship not in the knowledge base is likely to be true. This paper combines insights from several previous link prediction models into a new embedding model STransE that represents each entity as a low-dimensional vector, and each relation by two matrices and a translation vector. STransE is a simple combination of the SE and TransE models, but it obtains better link prediction performance on two benchmark datasets than previous embedding models. Thus, STransE can serve as a new baseline for the more complex models in the link prediction task.", "In statistical relational learning, the link prediction problem is key to automatically understand the structure of large knowledge bases. As in previous studies, we propose to solve this problem through latent factorization. However, here we make use of complex valued embeddings. The composition of complex embeddings can handle a large variety of binary relations, among them symmetric and antisymmetric relations. Compared to state-of-the-art models such as Neural Tensor Network and Holographic Embeddings, our approach based on complex embeddings is arguably simpler, as it only uses the Hermitian dot product, the complex counterpart of the standard dot product between real vectors. Our approach is scalable to large datasets as it remains linear in both space and time, while consistently outperforming alternative approaches on standard link prediction benchmarks.", "", "Knowledge graph completion aims to perform link prediction between entities. In this paper, we consider the approach of knowledge graph embeddings. Recently, models such as TransE and TransH build entity and relation embeddings by regarding a relation as translation from head entity to tail entity. We note that these models simply put both entities and relations within the same semantic space. In fact, an entity may have multiple aspects and various relations may focus on different aspects of entities, which makes a common space insufficient for modeling. In this paper, we propose TransR to build entity and relation embeddings in separate entity space and relation spaces. Afterwards, we learn embeddings by first projecting entities from entity space to corresponding relation space and then building translations between projected entities. In experiments, we evaluate our models on three tasks including link prediction, triple classification and relational fact extraction. Experimental results show significant and consistent improvements compared to state-of-the-art baselines including TransE and TransH. The source code of this paper can be obtained from https://github.com/mrlyk423/relation_extraction.", "Knowledge bases are an important resource for question answering and other tasks but often suffer from incompleteness and lack of ability to reason over their discrete entities and relationships. In this paper we introduce an expressive neural tensor network suitable for reasoning over relationships between two entities. Previous work represented entities as either discrete atomic units or with a single entity vector representation.
We show that performance can be improved when entities are represented as an average of their constituting word vectors. This allows sharing of statistical strength between, for instance, facts involving the \"Sumatran tiger\" and \"Bengal tiger.\" Lastly, we demonstrate that all models improve when these word vectors are initialized with vectors learned from unsupervised large corpora. We assess the model by considering the problem of predicting additional true relations between entities given a subset of the knowledge base. Our model outperforms previous models and can classify unseen relationships in WordNet and FreeBase with an accuracy of 86.2 and 90.0 , respectively.", "Link prediction for knowledge graphs is the task of predicting missing relationships between entities. Previous work on link prediction has focused on shallow, fast models which can scale to large knowledge graphs. However, these models learn less expressive features than deep, multi-layer models - which potentially limits performance. In this work we introduce ConvE, a multi-layer convolutional network model for link prediction, and report state-of-the-art results for several established datasets. We also show that the model is highly parameter efficient, yielding the same performance as DistMult and R-GCN with 8x and 17x fewer parameters. Analysis of our model suggests that it is particularly effective at modelling nodes with high indegree - which are common in highly-connected, complex knowledge graphs such as Freebase and YAGO3. In addition, it has been noted that the WN18 and FB15k datasets suffer from test set leakage, due to inverse relations from the training set being present in the test set - however, the extent of this issue has so far not been quantified. We find this problem to be severe: a simple rule-based model can achieve state-of-the-art results on both WN18 and FB15k. To ensure that models are evaluated on datasets where simply exploiting inverse relations cannot yield competitive results, we investigate and validate several commonly used datasets - deriving robust variants where necessary. We then perform experiments on these robust datasets for our own and several previously proposed models, and find that ConvE achieves state-of-the-art Mean Reciprocal Rank across all datasets.", "We deal with embedding a large scale knowledge graph composed of entities and relations into a continuous vector space. TransE is a promising method proposed recently, which is very efficient while achieving state-of-the-art predictive performance. We discuss some mapping properties of relations which should be considered in embedding, such as reflexive, one-to-many, many-to-one, and many-to-many. We note that TransE does not do well in dealing with these properties. Some complex models are capable of preserving these mapping properties but sacrifice efficiency in the process. To make a good trade-off between model capacity and efficiency, in this paper we propose TransH which models a relation as a hyperplane together with a translation operation on it. In this way, we can well preserve the above mapping properties of relations with almost the same model complexity of TransE. Additionally, as a practical knowledge graph is often far from completed, how to construct negative examples to reduce false negative labels in training is very important. Utilizing the one-to-many many-to-one mapping property of a relation, we propose a simple trick to reduce the possibility of false negative labeling. 
We conduct extensive experiments on link prediction, triplet classification and fact extraction on benchmark datasets like WordNet and Freebase. Experiments show TransH delivers significant improvements over TransE on predictive accuracy with comparable capability to scale up.", "Relational learning is becoming increasingly important in many areas of application. Here, we present a novel approach to relational learning based on the factorization of a three-way tensor. We show that unlike other tensor approaches, our method is able to perform collective learning via the latent components of the model and provide an efficient algorithm to compute the factorization. We substantiate our theoretical considerations regarding the collective learning capabilities of our model by the means of experiments on both a new dataset and a dataset commonly used in entity resolution. Furthermore, we show on common benchmark datasets that our approach achieves better or on-par results, if compared to current state-of-the-art relational learning solutions, while it is significantly faster to compute.", "Knowledge graphs are graphical representations of large databases of facts, which typically suffer from incompleteness. Inferring missing relations (links) between entities (nodes) is the task of link prediction. A recent state-of-the-art approach to link prediction, ConvE, implements a convolutional neural network to extract features from concatenated subject and relation vectors. Whilst results are impressive, the method is unintuitive and poorly understood. We propose a hypernetwork architecture that generates simplified relation-specific convolutional filters that (i) outperforms ConvE and all previous approaches across standard datasets; and (ii) can be framed as tensor factorization and thus set within a well established family of factorization models for link prediction. We thus demonstrate that convolution simply offers a convenient computational means of introducing sparsity and parameter tying to find an effective trade-off between non-linear expressiveness and the number of parameters to learn." ] }
1907.03143
2954554213
Knowledge graphs (KGs) typically contain temporal facts indicating relationships among entities at different times. Due to their incompleteness, several approaches have been proposed to infer new facts for a KG based on the existing ones, a problem known as KG completion. KG embedding approaches have proved effective for KG completion; however, they have been developed mostly for static KGs. Developing temporal KG embedding models is an increasingly important problem. In this paper, we build novel models for temporal KG completion through equipping static models with a diachronic entity embedding function which provides the characteristics of entities at any point in time. This is in contrast to the existing temporal KG embedding approaches where only static entity features are provided. The proposed embedding function is model-agnostic and can be potentially combined with any static model. We prove that combining it with SimplE, a recent model for static KG embedding, results in a fully expressive model for temporal KG completion. Our experiments indicate the superiority of our proposal compared to existing baselines.
The idea behind our proposed embeddings is similar to diachronic word embeddings, where a corpus is typically broken temporally into slices (e.g., 20-year chunks of a 200-year corpus) and embeddings are learned for the words in each slice, thus providing word embeddings that are a function of time (see, e.g., @cite_21 @cite_51 @cite_31 @cite_20 ). The main goal of diachronic word embeddings is to reveal how the meanings of words have evolved over time. Our work can be viewed as an extension of diachronic word embeddings to continuous-time KG completion.
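The sketch below illustrates the general idea of an embedding that is a function of time; the sinusoidal modulation and the temporal/static split are illustrative assumptions, not a reproduction of any cited model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_entities, dim, n_temporal = 4, 6, 3  # first 3 dimensions vary with time

a = rng.normal(size=(n_entities, dim))         # amplitudes / static features
w = rng.normal(size=(n_entities, n_temporal))  # per-dimension frequencies
b = rng.normal(size=(n_entities, n_temporal))  # per-dimension phases

def diachronic_embedding(entity, t):
    """Map (entity, time) to a vector; temporal dims are sinusoids of t."""
    z = a[entity].copy()
    z[:n_temporal] *= np.sin(w[entity] * t + b[entity])
    return z

print(diachronic_embedding(0, t=1999.0))
print(diachronic_embedding(0, t=2019.0))  # same entity, later time
```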
{ "cite_N": [ "@cite_31", "@cite_21", "@cite_51", "@cite_20" ], "mid": [ "2416513196", "2951300178", "1570098300", "2964231305" ], "abstract": [ "Understanding how words change their meanings over time is key to models of language and cultural evolution, but historical data on meaning is scarce, making theories hard to develop and test. Word embeddings show promise as a diachronic tool, but have not been carefully evaluated. We develop a robust methodology for quantifying semantic change by evaluating word embeddings (PPMI, SVD, word2vec) against known historical changes. We then use this methodology to reveal statistical laws of semantic evolution. Using six historical corpora spanning four languages and two centuries, we propose two quantitative laws of semantic change: (i) the law of conformity---the rate of semantic change scales with an inverse power-law of word frequency; (ii) the law of innovation---independent of frequency, words that are more polysemous have higher rates of semantic change.", "We provide a method for automatically detecting change in language across time through a chronologically trained neural language model. We train the model on the Google Books Ngram corpus to obtain word vector representations specific to each year, and identify words that have changed significantly from 1900 to 2009. The model identifies words such as \"cell\" and \"gay\" as having changed during that time period. The model simultaneously identifies the specific years during which such words underwent change.", "We propose a new computational approach for tracking and detecting statistically significant linguistic shifts in the meaning and usage of words. Such linguistic shifts are especially prevalent on the Internet, where the rapid exchange of ideas can quickly change a word's meaning. Our meta-analysis approach constructs property time series of word usage, and then uses statistically sound change point detection algorithms to identify significant linguistic shifts. We consider and analyze three approaches of increasing complexity to generate such linguistic property time series, the culmination of which uses distributional characteristics inferred from word co-occurrences. Using recently proposed deep neural language models, we first train vector representations of words for each time period. Second, we warp the vector spaces into one unified coordinate system. Finally, we construct a distance-based distributional time series for each word to track its linguistic displacement over time. We demonstrate that our approach is scalable by tracking linguistic change across years of micro-blogging using Twitter, a decade of product reviews using a corpus of movie reviews from Amazon, and a century of written books using the Google Book Ngrams. Our analysis reveals interesting patterns of language usage change commensurate with each medium.", "We present a probabilistic language model for time-stamped text data which tracks the semantic evolution of individual words over time. The model represents words and contexts by latent trajectories in an embedding space. At each moment in time, the embedding vectors are inferred from a probabilistic version of word2vec (, 2013b). These embedding vectors are connected in time through a latent diffusion process. 
We describe two scalable variational inference algorithms—skip-gram smoothing and skip-gram filtering—that allow us to train the model jointly over all times; thus learning on all data while simultaneously allowing word and context vectors to drift. Experimental results on three different corpora demonstrate that our dynamic model infers word embedding trajectories that are more interpretable and lead to higher predictive likelihoods than competing methods that are based on static models trained separately on time slices." ] }
1907.03141
2953797043
Structured weight pruning is a representative model compression technique of DNNs to reduce the storage and computation requirements and accelerate inference. An automatic hyperparameter determination process is necessary due to the large number of flexible hyperparameters. This work proposes AutoSlim, an automatic structured pruning framework with the following key performance improvements: (i) effectively incorporate the combination of structured pruning schemes in the automatic process; (ii) adopt the state-of-the-art ADMM-based structured weight pruning as the core algorithm, and propose an innovative additional purification step for further weight reduction without accuracy loss; and (iii) develop an effective heuristic search method enhanced by experience-based guided search, replacing the prior deep reinforcement learning technique, which has an underlying incompatibility with the target pruning problem. Extensive experiments on CIFAR-10 and ImageNet datasets demonstrate that AutoSlim is the key to achieving ultra-high pruning rates in the number of weights and FLOPs that could not be achieved before. As an example, AutoSlim outperforms the prior work on automatic model compression by up to 33 @math in pruning rate under the same accuracy. We release all models of this work at anonymous link: this http URL.
DNN weight pruning falls into two major categories: non-structured pruning @cite_31 @cite_41 @cite_7 @cite_11 @cite_12 @cite_2 , where arbitrary weights can be pruned, and structured pruning @cite_13 @cite_31 @cite_32 @cite_16 @cite_39 , which maintains certain regularity. Non-structured pruning can result in a higher pruning rate (weight reduction). However, as the weights must be stored in a sparse matrix format with indices, it often degrades performance on highly parallel hardware such as GPUs. Structured weight pruning overcomes this limitation.
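A toy NumPy contrast of the two categories (all names and sizes are our own): magnitude pruning zeroes arbitrary entries and leaves a sparse matrix that needs index storage, while structured pruning deletes whole rows (e.g., filters) and leaves a smaller dense matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 6))  # toy weight matrix

# Non-structured: zero the 50% smallest-magnitude weights; the
# survivors are scattered, so storing them needs (index, value) pairs.
thresh = np.quantile(np.abs(W), 0.5)
W_sparse = np.where(np.abs(W) >= thresh, W, 0.0)
print("nonzeros kept:", np.count_nonzero(W_sparse), "of", W.size)

# Structured: drop the two rows with the smallest L2 norm; the result
# is a smaller dense matrix, directly usable by parallel hardware.
keep = np.sort(np.argsort(np.linalg.norm(W, axis=1))[2:])
print("dense shape after structured pruning:", W[keep].shape)
```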
{ "cite_N": [ "@cite_7", "@cite_41", "@cite_32", "@cite_39", "@cite_2", "@cite_31", "@cite_16", "@cite_13", "@cite_12", "@cite_11" ], "mid": [ "2963674932", "2507318699", "2891561769", "2884180697", "", "2701719801", "2963363373", "2513419314", "", "2798170643" ], "abstract": [ "Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the total number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.", "Deep learning has become a ubiquitous technology to improve machine intelligence. However, most of the existing deep models are structurally very complex, making them difficult to be deployed on the mobile platforms with limited computational power. In this paper, we propose a novel network compression method called dynamic network surgery, which can remarkably reduce the network complexity by making on-the-fly connection pruning. Unlike the previous methods which accomplish this task in a greedy way, we properly incorporate connection splicing into the whole process to avoid incorrect pruning and make it as a continual network maintenance. The effectiveness of our method is proved with experiments. Without any accuracy loss, our method can efficiently compress the number of parameters in LeNet-5 and AlexNet by a factor of @math and @math respectively, proving that it outperforms the recent pruning method by considerable margins. Code and some models are available at this https URL", "Deep Convolutional Neural Networks (CNNs) offer remarkable performance of classifications and regressions in many high-dimensional problems and have been widely utilized in real-word cognitive applications. However, high computational cost of CNNs greatly hinder their deployment in resource-constrained applications, real-time systems and edge computing platforms. To overcome this challenge, we propose a novel filter-pruning framework, two-phase filter pruning based on conditional entropy, namely , to compress the CNN models and reduce the inference time with marginal performance degradation. In our proposed method, we formulate filter pruning process as an optimization problem and propose a novel filter selection criteria measured by conditional entropy. Based on the assumption that the representation of neurons shall be evenly distributed, we also develop a maximum-entropy filter freeze technique that can reduce over fitting. Two filter pruning strategies -- global and layer-wise strategies, are compared. 
Our experiment result shows that combining these two strategies can achieve a higher neural network compression ratio than applying only one of them under the same accuracy drop threshold. Two-phase pruning, that is, combining both global and layer-wise strategies, achieves 10 X FLOPs reduction and 46 inference time reduction on VGG-16, with 2 accuracy drop.", "Weight pruning methods of deep neural networks have been demonstrated to achieve a good model pruning ratio without loss of accuracy, thereby alleviating the significant computation storage requirements of large-scale DNNs. Structured weight pruning methods have been proposed to overcome the limitation of irregular network structure and demonstrated actual GPU acceleration. However, the pruning ratio and GPU acceleration are limited when accuracy needs to be maintained. In this work, we overcome pruning ratio and GPU acceleration limitations by proposing a unified, systematic framework of structured weight pruning for DNNs, named ADAM-ADMM. It is a framework that can be used to induce different types of structured sparsity, such as filter-wise, channel-wise, and shape-wise sparsity, as well non-structured sparsity. The proposed framework incorporates stochastic gradient descent with ADMM, and can be understood as a dynamic regularization method in which the regularization target is analytically updated in each iteration. A significant improvement in structured weight pruning ratio is achieved without loss of accuracy, along with fast convergence rate. With a small sparsity degree of 33.3 on the convolutional layers, we achieve 1.64 accuracy enhancement for the AlexNet model. This is obtained by mitigation of overfitting. Without loss of accuracy on the AlexNet model, we achieve 2.58x and 3.65x average measured speedup on two GPUs, clearly outperforming the prior work. The average speedups reach 2.77x and 7.5x when allowing a moderate accuracy loss of 2 . In this case the model compression for convolutional layers is 13.2x, corresponding to 10.5x CPU speedup. Our experiments on ResNet model and on other datasets like UCF101 and CIFAR-10 demonstrate the consistently higher performance of our framework. Our models and codes are released at this https URL", "", "This paper aims to simultaneously accelerate and compress off-the-shelf CNN models via filter pruning strategy. The importance of each filter is evaluated by the proposed entropy-based method first. Then several unimportant filters are discarded to get a smaller CNN model. Finally, fine-tuning is adopted to recover its generalization ability which is damaged during filter pruning. Our method can reduce the size of intermediate activations, which would dominate most memory footprint during model training stage but is less concerned in previous compression methods. Experiments on the ILSVRC-12 benchmark demonstrate the effectiveness of our method. Compared with previous filter importance evaluation criteria, our entropy-based method obtains better performance. We achieve 3.3x speed-up and 16.64x compression on VGG-16, 1.54x acceleration and 1.47x compression on ResNet-50, both with about 1 top-5 accuracy decrease.", "In this paper, we introduce a new channel pruning method to accelerate very deep convolutional neural networks. Given a trained CNN model, we propose an iterative two-step algorithm to effectively prune each layer, by a LASSO regression based channel selection and least square reconstruction. We further generalize this algorithm to multi-layer and multi-branch cases. 
Our method reduces the accumulated error and enhance the compatibility with various architectures. Our pruned VGG-16 achieves the state-of-the-art results by 5× speed-up along with only 0.3 increase of error. More importantly, our method is able to accelerate modern networks like ResNet, Xception and suffers only 1.4 , 1.0 accuracy loss under 2× speedup respectively, which is significant.", "High demand for computation resources severely hinders deployment of large-scale Deep Neural Networks (DNN) in resource constrained devices. In this work, we propose a Structured Sparsity Learning (SSL) method to regularize the structures (i.e., filters, channels, filter shapes, and layer depth) of DNNs. SSL can: (1) learn a compact structure from a bigger DNN to reduce computation cost; (2) obtain a hardware-friendly structured sparsity of DNN to efficiently accelerate the DNNs evaluation. Experimental results show that SSL achieves on average 5.1x and 3.1x speedups of convolutional layer computation of AlexNet against CPU and GPU, respectively, with off-the-shelf libraries. These speedups are about twice speedups of non-structured sparsity; (3) regularize the DNN structure to improve classification accuracy. The results show that for CIFAR-10, regularization on layer depth can reduce 20 layers of a Deep Residual Network (ResNet) to 18 layers while improve the accuracy from 91.25 to 92.60 , which is still slightly higher than that of original ResNet with 32 layers. For AlexNet, structure regularization by SSL also reduces the error by around 1 . Open source code is in this https URL", "", "Weight pruning methods for deep neural networks (DNNs) have been investigated recently, but prior work in this area is mainly heuristic, iterative pruning, thereby lacking guarantees on the weight reduction ratio and convergence time. To mitigate these limitations, we present a systematic weight pruning framework of DNNs using the alternating direction method of multipliers (ADMM). We first formulate the weight pruning problem of DNNs as a nonconvex optimization problem with combinatorial constraints specifying the sparsity requirements, and then adopt the ADMM framework for systematic weight pruning. By using ADMM, the original nonconvex optimization problem is decomposed into two subproblems that are solved iteratively. One of these subproblems can be solved using stochastic gradient descent, the other can be solved analytically. Besides, our method achieves a fast convergence rate." ] }
1907.03141
2953797043
Structured weight pruning is a representative model compression technique of DNNs to reduce the storage and computation requirements and accelerate inference. An automatic hyperparameter determination process is necessary due to the large number of flexible hyperparameters. This work proposes AutoSlim, an automatic structured pruning framework with the following key performance improvements: (i) effectively incorporate the combination of structured pruning schemes in the automatic process; (ii) adopt the state-of-the-art ADMM-based structured weight pruning as the core algorithm, and propose an innovative additional purification step for further weight reduction without accuracy loss; and (iii) develop an effective heuristic search method enhanced by experience-based guided search, replacing the prior deep reinforcement learning technique, which has an underlying incompatibility with the target pruning problem. Extensive experiments on CIFAR-10 and ImageNet datasets demonstrate that AutoSlim is the key to achieving ultra-high pruning rates in the number of weights and FLOPs that could not be achieved before. As an example, AutoSlim outperforms the prior work on automatic model compression by up to 33 @math in pruning rate under the same accuracy. We release all models of this work at anonymous link: this http URL.
The figure illustrates three structured pruning schemes on the CONV layers of a DNN: filter pruning, channel pruning, and filter-shape pruning (a.k.a. column pruning), which remove whole filter(s), channel(s), and the same location in each filter of each layer, respectively. CONV operations in DNNs are commonly transformed into matrix multiplications by converting weight tensors and feature map tensors to matrices @cite_13 , named general matrix multiplication (GEMM). The key advantage of structured pruning is that a full matrix is maintained in GEMM with reduced dimensionality and without the need for indices, thereby facilitating hardware implementations.
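A hedged im2col sketch of the GEMM view (toy shapes and helper names are our own): the convolution becomes a dense matrix product, and pruning whole filters simply deletes rows of the weight matrix, so no sparse indices are introduced.

```python
import numpy as np

rng = np.random.default_rng(0)
C, H, W_, K, F = 3, 5, 5, 3, 8          # channels, height, width, kernel, filters
x = rng.normal(size=(C, H, W_))
filters = rng.normal(size=(F, C, K, K))

def im2col(x, K):
    """Unfold KxK patches into columns of a (C*K*K, positions) matrix."""
    C, H, W_ = x.shape
    cols = [x[:, i:i + K, j:j + K].reshape(-1)
            for i in range(H - K + 1) for j in range(W_ - K + 1)]
    return np.stack(cols, axis=1)

X = im2col(x, K)                 # (27, 9)
Wm = filters.reshape(F, -1)      # (8, 27): one row per filter
y = Wm @ X                       # the convolution as a plain GEMM

# Filter pruning: drop the 2 rows with the smallest L2 norm; the GEMM
# stays fully dense, just with a reduced row dimension.
Wm_pruned = Wm[np.sort(np.argsort(np.linalg.norm(Wm, axis=1))[2:])]
print(y.shape, (Wm_pruned @ X).shape)   # (8, 9) (6, 9)
```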
{ "cite_N": [ "@cite_13" ], "mid": [ "2513419314" ], "abstract": [ "High demand for computation resources severely hinders deployment of large-scale Deep Neural Networks (DNN) in resource constrained devices. In this work, we propose a Structured Sparsity Learning (SSL) method to regularize the structures (i.e., filters, channels, filter shapes, and layer depth) of DNNs. SSL can: (1) learn a compact structure from a bigger DNN to reduce computation cost; (2) obtain a hardware-friendly structured sparsity of DNN to efficiently accelerate the DNNs evaluation. Experimental results show that SSL achieves on average 5.1x and 3.1x speedups of convolutional layer computation of AlexNet against CPU and GPU, respectively, with off-the-shelf libraries. These speedups are about twice speedups of non-structured sparsity; (3) regularize the DNN structure to improve classification accuracy. The results show that for CIFAR-10, regularization on layer depth can reduce 20 layers of a Deep Residual Network (ResNet) to 18 layers while improve the accuracy from 91.25 to 92.60 , which is still slightly higher than that of original ResNet with 32 layers. For AlexNet, structure regularization by SSL also reduces the error by around 1 . Open source code is in this https URL" ] }
1907.03141
2953797043
Structured weight pruning is a representative model compression technique of DNNs to reduce the storage and computation requirements and accelerate inference. An automatic hyperparameter determination process is necessary due to the large number of flexible hyperparameters. This work proposes AutoSlim, an automatic structured pruning framework with the following key performance improvements: (i) effectively incorporate the combination of structured pruning schemes in the automatic process; (ii) adopt the state-of-art ADMM-based structured weight pruning as the core algorithm, and propose an innovative additional purification step for further weight reduction without accuracy loss; and (iii) develop effective heuristic search method enhanced by experience-based guided search, replacing the prior deep reinforcement learning technique which has underlying incompatibility with the target pruning problem. Extensive experiments on CIFAR-10 and ImageNet datasets demonstrate that AutoSlim is the key to achieve ultra-high pruning rates on the number of weights and FLOPs that cannot be achieved before. As an example, AutoSlim outperforms the prior work on automatic model compression by up to 33 @math in pruning rate under the same accuracy. We release all models of this work at anonymous link: this http URL.
It is also worth mentioning that filter pruning and channel pruning are correlated @cite_16 , as pruning a filter in layer @math (after batch norm) results in the removal of the corresponding channel in layer @math . The relationship in ResNet @cite_9 and MobileNet @cite_22 is more complicated due to bypass links.
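The coupling can be shown directly on weight shapes (a toy sketch with assumed layer sizes): deleting filter f in layer i shrinks that layer's output, forcing layer i+1 to drop the matching input channel.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 3, 3, 3))   # layer i:   8 filters, 3 input channels
W2 = rng.normal(size=(16, 8, 3, 3))  # layer i+1: 16 filters, 8 input channels

pruned_filter = 5
keep = [f for f in range(W1.shape[0]) if f != pruned_filter]
W1 = W1[keep]     # filter pruning in layer i     -> (7, 3, 3, 3)
W2 = W2[:, keep]  # forced channel pruning in i+1 -> (16, 7, 3, 3)
print(W1.shape, W2.shape)
```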
{ "cite_N": [ "@cite_9", "@cite_16", "@cite_22" ], "mid": [ "2194775991", "2963363373", "2963163009" ], "abstract": [ "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.", "In this paper, we introduce a new channel pruning method to accelerate very deep convolutional neural networks. Given a trained CNN model, we propose an iterative two-step algorithm to effectively prune each layer, by a LASSO regression based channel selection and least square reconstruction. We further generalize this algorithm to multi-layer and multi-branch cases. Our method reduces the accumulated error and enhance the compatibility with various architectures. Our pruned VGG-16 achieves the state-of-the-art results by 5× speed-up along with only 0.3 increase of error. More importantly, our method is able to accelerate modern networks like ResNet, Xception and suffers only 1.4 , 1.0 accuracy loss under 2× speedup respectively, which is significant.", "In this paper we describe a new mobile architecture, MobileNetV2, that improves the state of the art performance of mobile models on multiple tasks and benchmarks as well as across a spectrum of different model sizes. We also describe efficient ways of applying these mobile models to object detection in a novel framework we call SSDLite. Additionally, we demonstrate how to build mobile semantic segmentation models through a reduced form of DeepLabv3 which we call Mobile DeepLabv3. is based on an inverted residual structure where the shortcut connections are between the thin bottleneck layers. The intermediate expansion layer uses lightweight depthwise convolutions to filter features as a source of non-linearity. Additionally, we find that it is important to remove non-linearities in the narrow layers in order to maintain representational power. We demonstrate that this improves performance and provide an intuition that led to this design. Finally, our approach allows decoupling of the input output domains from the expressiveness of the transformation, which provides a convenient framework for further analysis. We measure our performance on ImageNet [1] classification, COCO object detection [2], VOC image segmentation [3]. 
We evaluate the trade-offs between accuracy, and number of operations measured by multiply-adds (MAdd), as well as actual latency, and the number of parameters." ] }
1907.03141
2953797043
Structured weight pruning is a representative model compression technique of DNNs to reduce the storage and computation requirements and accelerate inference. An automatic hyperparameter determination process is necessary due to the large number of flexible hyperparameters. This work proposes AutoSlim, an automatic structured pruning framework with the following key performance improvements: (i) effectively incorporate the combination of structured pruning schemes in the automatic process; (ii) adopt the state-of-art ADMM-based structured weight pruning as the core algorithm, and propose an innovative additional purification step for further weight reduction without accuracy loss; and (iii) develop effective heuristic search method enhanced by experience-based guided search, replacing the prior deep reinforcement learning technique which has underlying incompatibility with the target pruning problem. Extensive experiments on CIFAR-10 and ImageNet datasets demonstrate that AutoSlim is the key to achieve ultra-high pruning rates on the number of weights and FLOPs that cannot be achieved before. As an example, AutoSlim outperforms the prior work on automatic model compression by up to 33 @math in pruning rate under the same accuracy. We release all models of this work at anonymous link: this http URL.
Alternating Direction Method of Multipliers (ADMM) is a powerful mathematical optimization technique that decomposes an original problem into two subproblems which can be solved separately and efficiently @cite_33 . Consider the general optimization problem @math . ADMM decomposes it into two subproblems on @math and @math ( @math is an auxiliary variable), which are solved iteratively until convergence. The first subproblem derives @math given @math : @math . The second subproblem derives @math given @math : @math . Both @math and @math are quadratic functions; the standard iteration is sketched below.
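For concreteness, the generic scaled-form ADMM iteration for a problem of the form min f(x) + g(z) subject to x = z can be written as follows. This is a textbook sketch following the general setup surveyed in @cite_33 ; the symbols f, g, rho and u stand in for the @math placeholders above and are not taken from the paper's exact model.

```latex
% Generic scaled-form ADMM for  min_x f(x) + g(z)  s.t.  x = z
% (rho > 0 is the penalty parameter, u the scaled dual variable)
\begin{align*}
x^{k+1} &= \arg\min_x \; f(x) + \tfrac{\rho}{2}\,\|x - z^k + u^k\|_2^2
  && \text{(first subproblem, quadratic augmentation in } x\text{)}\\
z^{k+1} &= \arg\min_z \; g(z) + \tfrac{\rho}{2}\,\|x^{k+1} - z + u^k\|_2^2
  && \text{(second subproblem, quadratic augmentation in } z\text{)}\\
u^{k+1} &= u^k + x^{k+1} - z^{k+1}
  && \text{(dual update)}
\end{align*}
```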
{ "cite_N": [ "@cite_33" ], "mid": [ "2164278908" ], "abstract": [ "Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for l1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations." ] }
1907.03141
2953797043
Structured weight pruning is a representative model compression technique of DNNs to reduce the storage and computation requirements and accelerate inference. An automatic hyperparameter determination process is necessary due to the large number of flexible hyperparameters. This work proposes AutoSlim, an automatic structured pruning framework with the following key performance improvements: (i) effectively incorporate the combination of structured pruning schemes in the automatic process; (ii) adopt the state-of-art ADMM-based structured weight pruning as the core algorithm, and propose an innovative additional purification step for further weight reduction without accuracy loss; and (iii) develop effective heuristic search method enhanced by experience-based guided search, replacing the prior deep reinforcement learning technique which has underlying incompatibility with the target pruning problem. Extensive experiments on CIFAR-10 and ImageNet datasets demonstrate that AutoSlim is the key to achieve ultra-high pruning rates on the number of weights and FLOPs that cannot be achieved before. As an example, AutoSlim outperforms the prior work on automatic model compression by up to 33 @math in pruning rate under the same accuracy. We release all models of this work at anonymous link: this http URL.
As a key property, ADMM can effectively deal with a subset of combinatorial constraints and yield optimal (or at least high-quality) solutions. The constraints associated with DNN weight pruning (both non-structured and structured) belong to this subset @cite_10 @cite_5 . In the DNN weight pruning problem, @math is the loss function of the DNN, and the first subproblem amounts to DNN training with a dynamic regularizer, which can be solved using current gradient descent techniques and solution tools @cite_14 @cite_6 for DNN training. @math corresponds to the combinatorial constraints on the number of weights. Thanks to this compatibility with ADMM, the second subproblem has an optimal, analytical solution for weight pruning via Euclidean projection (sketched below). This solution framework applies both to non-structured pruning and to the different variations of structured pruning schemes.
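To make the analytical Euclidean projection concrete, the sketch below projects a weight tensor onto the set of tensors with at most k nonzero entries by keeping the k largest-magnitude weights, which is the closed-form L2 projection for a non-structured weight-count constraint. The function name and the per-layer constraint are illustrative assumptions; structured variants instead zero out whole rows, columns, or filters.

```python
import numpy as np

def project_cardinality(w: np.ndarray, k: int) -> np.ndarray:
    """Euclidean projection of w onto {z : ||z||_0 <= k}.

    The closest point (in L2) with at most k nonzeros keeps the
    k largest-magnitude entries of w and zeroes out the rest.
    """
    if k >= w.size:
        return w.copy()
    z = np.zeros_like(w)
    flat = np.abs(w).ravel()
    # Indices of the k largest-magnitude weights.
    keep = np.argpartition(flat, -k)[-k:]
    z.ravel()[keep] = w.ravel()[keep]
    return z

# Example: keep 3 of 8 weights.
w = np.array([0.1, -2.0, 0.3, 1.5, -0.05, 0.9, -0.2, 0.0])
print(project_cardinality(w, 3))  # only -2.0, 1.5 and 0.9 survive
```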
{ "cite_N": [ "@cite_5", "@cite_14", "@cite_10", "@cite_6" ], "mid": [ "2765313807", "1522301498", "2295652899", "" ], "abstract": [ "In this paper, we design and analyze a new zeroth-order online algorithm, namely, the zeroth-order online alternating direction method of multipliers (ZOO-ADMM), which enjoys dual advantages of being gradient-free operation and employing the ADMM to accommodate complex structured regularizers. Compared to the first-order gradient-based online algorithm, we show that ZOO-ADMM requires @math times more iterations, leading to a convergence rate of @math , where @math is the number of optimization variables, and @math is the number of iterations. To accelerate ZOO-ADMM, we propose two minibatch strategies: gradient sample averaging and observation averaging, resulting in an improved convergence rate of @math , where @math is the minibatch size. In addition to convergence analysis, we also demonstrate ZOO-ADMM to applications in signal processing, statistics, and machine learning.", "We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has little memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, on which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm.", "The alternating direction method of multipliers (ADMM) is widely used to solve large-scale linearly constrained optimization problems, convex or nonconvex, in many engineering fields. However there is a general lack of theoretical understanding of the algorithm when the objective function is nonconvex. In this paper we analyze the convergence of the ADMM for solving certain nonconvex consensus and sharing problems. We show that the classical ADMM converges to the set of stationary solutions, provided that the penalty parameter in the augmented Lagrangian is chosen to be sufficiently large. For the sharing problems, we show that the ADMM is convergent regardless of the number of variable blocks. Our analysis does not impose any assumptions on the iterates generated by the algorithm and is broadly applicable to many ADMM variants involving proximal update rules and various flexible block selection rules.", "" ] }
1907.03141
2953797043
Structured weight pruning is a representative model compression technique of DNNs to reduce the storage and computation requirements and accelerate inference. An automatic hyperparameter determination process is necessary due to the large number of flexible hyperparameters. This work proposes AutoSlim, an automatic structured pruning framework with the following key performance improvements: (i) effectively incorporate the combination of structured pruning schemes in the automatic process; (ii) adopt the state-of-art ADMM-based structured weight pruning as the core algorithm, and propose an innovative additional purification step for further weight reduction without accuracy loss; and (iii) develop effective heuristic search method enhanced by experience-based guided search, replacing the prior deep reinforcement learning technique which has underlying incompatibility with the target pruning problem. Extensive experiments on CIFAR-10 and ImageNet datasets demonstrate that AutoSlim is the key to achieve ultra-high pruning rates on the number of weights and FLOPs that cannot be achieved before. As an example, AutoSlim outperforms the prior work on automatic model compression by up to 33 @math in pruning rate under the same accuracy. We release all models of this work at anonymous link: this http URL.
Much recent work has investigated the concept of automated machine learning (AutoML), i.e., using machine learning for hyperparameter determination in DNNs. Neural architecture search (NAS) @cite_36 @cite_0 @cite_18 is a representative application of AutoML. NAS has been deployed in Google’s Cloud AutoML framework, which frees customers from the time-consuming DNN architecture design process. The most closely related prior work, AMC @cite_1 , applies AutoML to DNN weight pruning, leveraging a DRL framework similar to that of Google AutoML to generate a weight pruning rate for each layer of the target DNN. In conventional machine learning methods, the overall performance (accuracy) depends greatly on the quality of features @cite_35 . To reduce the burden of manual feature selection, automated feature engineering @cite_28 learns to generate an appropriate feature set that improves the performance of the corresponding machine learning tools.
{ "cite_N": [ "@cite_35", "@cite_18", "@cite_36", "@cite_28", "@cite_1", "@cite_0" ], "mid": [ "", "2771727678", "2553303224", "", "2949941638", "2951886768" ], "abstract": [ "", "We propose a new method for learning the structure of convolutional neural networks (CNNs) that is more efficient than recent state-of-the-art methods based on reinforcement learning and evolutionary algorithms. Our approach uses a sequential model-based optimization (SMBO) strategy, in which we search for structures in order of increasing complexity, while simultaneously learning a surrogate model to guide the search through structure space. Direct comparison under the same search space shows that our method is up to 5 times more efficient than the RL method of (2018) in terms of number of models evaluated, and 8 times faster in terms of total compute. The structures we discover in this way achieve state of the art classification accuracies on CIFAR-10 and ImageNet.", "Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214.", "", "Model compression is a critical technique to efficiently deploy neural network models on mobile devices which have limited computation resources and tight power budgets. Conventional model compression techniques rely on hand-crafted heuristics and rule-based policies that require domain experts to explore the large design space trading off among model size, speed, and accuracy, which is usually sub-optimal and time-consuming. In this paper, we propose AutoML for Model Compression (AMC) which leverage reinforcement learning to provide the model compression policy. This learning-based compression policy outperforms conventional rule-based compression policy by having higher compression ratio, better preserving the accuracy and freeing human labor. Under 4x FLOPs reduction, we achieved 2.7 better accuracy than the handcrafted model compression policy for VGG-16 on ImageNet. We applied this automated, push-the-button compression pipeline to MobileNet and achieved 1.81x speedup of measured inference latency on an Android phone and 1.43x speedup on the Titan XP GPU, with only 0.1 loss of ImageNet Top-1 accuracy.", "At present, designing convolutional neural network (CNN) architectures requires both human expertise and labor. 
New architectures are handcrafted by careful experimentation or modified from a handful of existing networks. We introduce MetaQNN, a meta-modeling algorithm based on reinforcement learning to automatically generate high-performing CNN architectures for a given learning task. The learning agent is trained to sequentially choose CNN layers using @math -learning with an @math -greedy exploration strategy and experience replay. The agent explores a large but finite space of possible architectures and iteratively discovers designs with improved performance on the learning task. On image classification benchmarks, the agent-designed networks (consisting of only standard convolution, pooling, and fully-connected layers) beat existing networks designed with the same layer types and are competitive against the state-of-the-art methods that use more complex layer types. We also outperform existing meta-modeling approaches for network design on image classification tasks." ] }
1907.03030
2964184973
This paper targets the problem of image set-based face verification and identification. Unlike traditional single media (an image or video) setting, we encounter a set of heterogeneous contents containing orderless images and videos. The importance of each image is usually considered either equal or based on their independent quality assessment. How to model the relationship of orderless images within a set remains a challenge. We address this problem by formulating it as a Markov Decision Process (MDP) in the latent space. Specifically, we first present a dependency-aware attention control (DAC) network, which resorts to actor-critic reinforcement learning for sequential attention decision of each image embedding to fully exploit the rich correlation cues among the unordered images. Moreover, we introduce its sample-efficient variant with off-policy experience replay to speed up the learning process. The pose-guided representation scheme can further boost the performance at the extremes of the pose variation.
Reinforcement learning (RL) trains an agent to interact (by trial and error) with a dynamic environment with the objective of maximizing its accumulated reward. Recently, deep RL with convolutional neural networks (CNNs) achieved human-level performance in Atari games @cite_12 . The CNN is an ideal approximate function to address the infinite state space @cite_65 . There are two main streams of methods for solving RL problems: methods based on value functions and methods based on policy gradients. The first category, @math Q-learning, is the common solution for discrete action tasks @cite_12 (a minimal tabular update is sketched below). The second category can be efficient for continuous action spaces @cite_44 @cite_20 . There is also a hybrid actor-critic approach, in which the parameterized policy is called an actor and the learned value function is called a critic @cite_42 @cite_53 ; as it is essentially a policy gradient method, it can also be used for continuous action spaces @cite_28 .
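As a minimal illustration of the value-based stream, the sketch below implements the tabular Q-learning update with an epsilon-greedy behavior policy. The toy environment (5 states, 2 actions) and all hyperparameters are illustrative assumptions, not taken from the cited works.

```python
import numpy as np

def q_learning_step(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One tabular Q-learning update:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q

def epsilon_greedy(Q, s, epsilon=0.1):
    """Behavior policy: explore with probability epsilon, else exploit."""
    if np.random.rand() < epsilon:
        return np.random.randint(Q.shape[1])
    return int(np.argmax(Q[s]))

# Toy usage with a hypothetical 5-state, 2-action environment.
Q = np.zeros((5, 2))
Q = q_learning_step(Q, s=0, a=epsilon_greedy(Q, 0), r=1.0, s_next=1)
```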
{ "cite_N": [ "@cite_28", "@cite_53", "@cite_42", "@cite_65", "@cite_44", "@cite_12", "@cite_20" ], "mid": [ "2745868649", "2950395671", "2260756217", "2121863487", "2165150801", "2145339207", "" ], "abstract": [ "Deep reinforcement learning (DRL) is poised to revolutionize the field of artificial intelligence (AI) and represents a step toward building autonomous systems with a higherlevel understanding of the visual world. Currently, deep learning is enabling reinforcement learning (RL) to scale to problems that were previously intractable, such as learning to play video games directly from pixels. DRL algorithms are also applied to robotics, allowing control policies for robots to be learned directly from camera inputs in the real world. In this survey, we begin with an introduction to the general field of RL, then progress to the main streams of value-based and policy-based methods. Our survey will cover central algorithms in deep RL, including the deep Q-network (DQN), trust region policy optimization (TRPO), and asynchronous advantage actor critic. In parallel, we highlight the unique advantages of deep neural networks, focusing on visual understanding via RL. To conclude, we describe several current areas of research within the field.", "We introduce a hybrid CPU GPU version of the Asynchronous Advantage Actor-Critic (A3C) algorithm, currently the state-of-the-art method in reinforcement learning for various gaming tasks. We analyze its computational traits and concentrate on aspects critical to leveraging the GPU's computational power. We introduce a system of queues and a dynamic scheduling strategy, potentially helpful for other asynchronous algorithms as well. Our hybrid CPU GPU version of A3C, based on TensorFlow, achieves a significant speed up compared to a CPU implementation; we make it publicly available to other researchers at this https URL .", "We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.", "Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives when interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning. Their discussion ranges from the history of the field's intellectual foundations to the most recent developments and applications. The only necessary mathematical background is familiarity with elementary concepts of probability. The book is divided into three parts. Part I defines the reinforcement learning problem in terms of Markov decision processes. 
Part II provides basic solution methods: dynamic programming, Monte Carlo methods, and temporal-difference learning. Part III presents a unified view of the solution methods and incorporates artificial neural networks, eligibility traces, and planning; the two final chapters present case studies and consider the future of reinforcement learning.", "In this paper we consider deterministic policy gradient algorithms for reinforcement learning with continuous actions. The deterministic policy gradient has a particularly appealing form: it is the expected gradient of the action-value function. This simple form means that the deterministic policy gradient can be estimated much more efficiently than the usual stochastic policy gradient. To ensure adequate exploration, we introduce an off-policy actor-critic algorithm that learns a deterministic target policy from an exploratory behaviour policy. We demonstrate that deterministic policy gradient algorithms can significantly outperform their stochastic counterparts in high-dimensional action spaces.", "An artificial agent is developed that learns to play a diverse range of classic Atari 2600 computer games directly from sensory experience, achieving a performance comparable to that of an expert human player; this work paves the way to building general-purpose learning algorithms that bridge the divide between perception and action.", "" ] }
1907.03030
2964184973
This paper targets the problem of image set-based face verification and identification. Unlike traditional single media (an image or video) setting, we encounter a set of heterogeneous contents containing orderless images and videos. The importance of each image is usually considered either equal or based on their independent quality assessment. How to model the relationship of orderless images within a set remains a challenge. We address this problem by formulating it as a Markov Decision Process (MDP) in the latent space. Specifically, we first present a dependency-aware attention control (DAC) network, which resorts to actor-critic reinforcement learning for sequential attention decision of each image embedding to fully exploit the rich correlation cues among the unordered images. Moreover, we introduce its sample-efficient variant with off-policy experience replay to speed up the learning process. The pose-guided representation scheme can further boost the performance at the extremes of the pose variation.
Moreover, policy-based and actor-critic methods have faster convergence characteristics than value-based methods @cite_2 , but they usually suffer from low sample efficiency and high variance, and often converge to local optima, since they typically learn via on-policy algorithms @cite_64 @cite_35 . Even the Asynchronous Advantage Actor-Critic @cite_42 @cite_53 requires new samples to be collected for each gradient step on the policy. This quickly becomes extravagantly expensive, as the number of gradient steps needed to learn an effective policy increases with task complexity. Off-policy learning instead aims to reuse past experiences. This is not directly feasible with conventional policy gradient formulations, although it is relatively straightforward for value-based methods @cite_65 . Hence, in this paper we focus on combining the stability of actor-critic methods with the efficiency of off-policy RL, capitalizing on recent advances in deep RL @cite_42 , especially off-policy algorithms @cite_60 @cite_45 (a minimal replay buffer sketch follows).
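The off-policy reuse of past experiences is typically realized with an experience replay buffer; below is a minimal generic sketch (capacity and batch size are arbitrary placeholders), not the authors' implementation.

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity store of (s, a, r, s_next, done) transitions,
    enabling off-policy updates from uniformly re-sampled experience."""

    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)  # oldest transitions evicted

    def push(self, s, a, r, s_next, done):
        self.buffer.append((s, a, r, s_next, done))

    def sample(self, batch_size=32):
        batch = random.sample(self.buffer, batch_size)
        # Transpose list of transitions into tuples of components.
        return tuple(zip(*batch))

    def __len__(self):
        return len(self.buffer)

buf = ReplayBuffer()
```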
{ "cite_N": [ "@cite_35", "@cite_64", "@cite_60", "@cite_53", "@cite_42", "@cite_65", "@cite_45", "@cite_2" ], "mid": [ "1191599655", "2119717200", "2949608212", "2950395671", "2260756217", "2121863487", "2556958149", "2155027007" ], "abstract": [ "Policy gradient methods are an appealing approach in reinforcement learning because they directly optimize the cumulative reward and can straightforwardly be used with nonlinear function approximators such as neural networks. The two main challenges are the large number of samples typically required, and the difficulty of obtaining stable and steady improvement despite the nonstationarity of the incoming data. We address the first challenge by using value functions to substantially reduce the variance of policy gradient estimates at the cost of some bias, with an exponentially-weighted estimator of the advantage function that is analogous to TD(lambda). We address the second challenge by using trust region optimization procedure for both the policy and the value function, which are represented by neural networks. Our approach yields strong empirical results on highly challenging 3D locomotion tasks, learning running gaits for bipedal and quadrupedal simulated robots, and learning a policy for getting the biped to stand up from starting out lying on the ground. In contrast to a body of prior work that uses hand-crafted policy representations, our neural network policies map directly from raw kinematics to joint torques. Our algorithm is fully model-free, and the amount of simulated experience required for the learning tasks on 3D bipeds corresponds to 1-2 weeks of real time.", "This article presents a general class of associative reinforcement learning algorithms for connectionist networks containing stochastic units. These algorithms, called REINFORCE algorithms, are shown to make weight adjustments in a direction that lies along the gradient of expected reinforcement in both immediate-reinforcement tasks and certain limited forms of delayed-reinforcement tasks, and they do this without explicitly computing gradient estimates or even storing information from which such estimates could be computed. Specific examples of such algorithms are presented, some of which bear a close relationship to certain existing algorithms while others are novel but potentially interesting in their own right. Also given are results that show how such algorithms can be naturally integrated with backpropagation. We close with a brief discussion of a number of additional issues surrounding the use of such algorithms, including what is known about their limiting behaviors as well as further considerations that might be used to help develop similar but potentially more powerful reinforcement learning algorithms.", "We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. 
Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.", "We introduce a hybrid CPU GPU version of the Asynchronous Advantage Actor-Critic (A3C) algorithm, currently the state-of-the-art method in reinforcement learning for various gaming tasks. We analyze its computational traits and concentrate on aspects critical to leveraging the GPU's computational power. We introduce a system of queues and a dynamic scheduling strategy, potentially helpful for other asynchronous algorithms as well. Our hybrid CPU GPU version of A3C, based on TensorFlow, achieves a significant speed up compared to a CPU implementation; we make it publicly available to other researchers at this https URL .", "We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.", "Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives when interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning. Their discussion ranges from the history of the field's intellectual foundations to the most recent developments and applications. The only necessary mathematical background is familiarity with elementary concepts of probability. The book is divided into three parts. Part I defines the reinforcement learning problem in terms of Markov decision processes. Part II provides basic solution methods: dynamic programming, Monte Carlo methods, and temporal-difference learning. Part III presents a unified view of the solution methods and incorporates artificial neural networks, eligibility traces, and planning; the two final chapters present case studies and consider the future of reinforcement learning.", "This paper presents an actor-critic deep reinforcement learning agent with experience replay that is stable, sample efficient, and performs remarkably well on challenging environments, including the discrete 57-game Atari domain and several continuous control problems. To achieve this, the paper introduces several innovations, including truncated importance sampling with bias correction, stochastic dueling network architectures, and a new trust region policy optimization method.", "Function approximation is essential to reinforcement learning, but the standard approach of approximating a value function and determining a policy from it has so far proven theoretically intractable. 
In this paper we explore an alternative approach in which the policy is explicitly represented by its own function approximator, independent of the value function, and is updated according to the gradient of expected reward with respect to the policy parameters. Williams's REINFORCE method and actor-critic methods are examples of this approach. Our main new result is to show that the gradient can be written in a form suitable for estimation from experience aided by an approximate action-value or advantage function. Using this result, we prove for the first time that a version of policy iteration with arbitrary differentiable function approximation is convergent to a locally optimal policy." ] }
1907.02884
2954108589
Intent Detection and Slot Filling are two pillar tasks in Spoken Natural Language Understanding. Common approaches adopt joint Deep Learning architectures in attention-based recurrent frameworks. In this work, we aim at exploiting the success of "recurrence-less" models for these tasks. We introduce Bert-Joint, i.e., a multi-lingual joint text classification and sequence labeling framework. The experimental evaluation over two well-known English benchmarks demonstrates the strong performances that can be obtained with this model, even when few annotated data is available. Moreover, we annotated a new dataset for the Italian language, and we observed similar performances without the need for changing the model.
The SF task is addressed through supervised sequence labeling approaches, e.g., MEMMs @cite_7 , CRFs @cite_9 or, again, deep learning, such as recurrent neural networks (RNNs) @cite_4 . Deep learning research started as extensions of deep neural networks and DBNs (e.g., @cite_3 ) and is sometimes merged with conditional random fields @cite_24 . Later, mesnil2015 propose models based on recurrent neural networks (RNNs); a minimal RNN tagger is sketched below. On the same line of research is the work of @cite_10 , which uses RNNs but introduces label dependencies by feeding back previous output labels. chen2016 address the error propagation problem in a multi-turn scenario by means of an End-to-End Memory Network @cite_27 specifically designed to model the knowledge carryover.
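As an illustration of RNN-based slot filling, the sketch below tags each token of an utterance with a slot label using a bidirectional LSTM. The vocabulary size, dimensions, and number of labels are illustrative assumptions, not taken from the cited systems.

```python
import torch
import torch.nn as nn

class BiLSTMSlotTagger(nn.Module):
    """Per-token slot classification: embed -> BiLSTM -> linear."""

    def __init__(self, vocab_size=1000, num_labels=7, emb_dim=64, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.out = nn.Linear(2 * hidden, num_labels)  # one logit per label

    def forward(self, token_ids):              # (batch, seq_len)
        h, _ = self.lstm(self.emb(token_ids))  # (batch, seq_len, 2*hidden)
        return self.out(h)                     # (batch, seq_len, num_labels)

# Toy usage: one utterance of 5 tokens, one predicted label per token.
model = BiLSTMSlotTagger()
logits = model(torch.randint(0, 1000, (1, 5)))
print(logits.argmax(-1))  # shape (1, 5)
```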
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_9", "@cite_3", "@cite_24", "@cite_27", "@cite_10" ], "mid": [ "", "1934019294", "2166293310", "2395389931", "2094472029", "2951008357", "2137871902" ], "abstract": [ "", "Hidden Markov models (HMMs) are a powerful probabilistic tool for modeling sequential data, and have been applied with success to many text-related tasks, such as part-of-speech tagging, text segmentation and information extraction. In these cases, the observations are usually modeled as multinomial distributions over a discrete vocabulary, and the HMM parameters are set to maximize the likelihood of the observations. This paper presents a new Markovian sequence model, closely related to HMMs, that allows observations to be represented as arbitrary overlapping features (such as word, capitalization, formatting, part-of-speech), and defines the conditional probability of state sequences given observation sequences. It does this by using the maximum entropy framework to fit a set of exponential models that represent the probability of a state given an observation and the previous state. We present positive experimental results on the segmentation of FAQ’s.", "Spoken Language Understanding (SLU) for conversational systems (SDS) aims at extracting concept and their relations from spontaneous speech. Previous approaches to SLU have modeled concept relations as stochastic semantic networks ranging from generative approach to discriminative. As spoken dialog systems complexity increases, SLU needs to perform understanding based on a richer set of features ranging from a-priori knowledge, long dependency, dialog history, system belief, etc. This paper studies generative and discriminative approaches to modeling the sentence segmentation and concept labeling. We evaluate algorithms based on Finite State Transducers (FST) as well as discriminative algorithms based on Support Vector Machine sequence classifier based and Conditional Random Fields (CRF). We compare them in terms of concept accuracy, generalization and robustness to annotation ambiguities. We also show how non-local non-lexical features (e.g. a-priori knowledge) can be modeled with CRF which is the best performing algorithm across tasks. The evaluation is carried out on two SLU tasks of different complexity, namely ATIS and MEDIA corpora.", "This paper investigates the use of deep belief networks (DBN) for semantic tagging, a sequence classification task, in spoken language understanding (SLU). We evaluate the performance of the DBN based sequence tagger on the well-studied ATIS task and compare our technique to conditional random fields (CRF), a state-of-the-art classifier for sequence classification. In conjunction with lexical and named entity features, we also use dependency parser based syntactic features and part of speech (POS) tags [1]. 
Under both noisy conditions (output of automatic speech recognition system) and clean conditions (manual transcriptions), our deep belief network based sequence tagger outperforms the best CRF based system described in [1] by an absolute 2% and 1% F-measure, respectively. Upon carrying out an analysis of cases where CRF and DBN models made different predictions, we observed that when discrete features are projected onto a continuous space during neural network training, the model learns to cluster these features leading to its improved generalization capability, relative to a CRF model, especially in cases where some features are either missing or noisy.", "We describe a joint model for intent detection and slot filling based on convolutional neural networks (CNN). The proposed architecture can be perceived as a neural network (NN) version of the triangular CRF model (TriCRF), in which the intent label and the slot sequence are modeled jointly and their dependencies are exploited. Our slot filling component is a globally normalized CRF style model, as opposed to left-to-right models in recent NN based slot taggers. Its features are automatically extracted through CNN layers and shared by the intent model. We show that our slot model component generates state-of-the-art results, outperforming CRF significantly. Our joint model outperforms the standard TriCRF by 1% absolute for both intent and slot. On a number of other domains, our joint model achieves 0.7%-1%, and 0.9%-2.1% absolute gains over the independent modeling approach for intent and slot respectively.", "We introduce a neural network with a recurrent attention model over a possibly large external memory. The architecture is a form of Memory Network (, 2015) but unlike the model in that work, it is trained end-to-end, and hence requires significantly less supervision during training, making it more generally applicable in realistic settings. It can also be seen as an extension of RNNsearch to the case where multiple computational steps (hops) are performed per output symbol. The flexibility of the model allows us to apply it to tasks as diverse as (synthetic) question answering and to language modeling. For the former our approach is competitive with Memory Networks, but with less supervision. For the latter, on the Penn TreeBank and Text8 datasets our approach demonstrates comparable performance to RNNs and LSTMs. In both cases we show that the key concept of multiple computational hops yields improved results.", "Semantic slot filling is one of the most challenging problems in spoken language understanding (SLU). In this paper, we propose to use recurrent neural networks (RNNs) for this task, and present several novel architectures designed to efficiently model past and future temporal dependencies. Specifically, we implemented and compared several important RNN architectures, including Elman, Jordan, and hybrid variants. To facilitate reproducibility, we implemented these networks with the publicly available Theano neural network toolkit and completed experiments on the well-known airline travel information system (ATIS) benchmark. In addition, we compared the approaches on two custom SLU data sets from the entertainment and movies domains. Our results show that the RNN-based models outperform the conditional random field (CRF) baseline by 2% in absolute error reduction on the ATIS benchmark. We improve the state-of-the-art by 0.5% in the Entertainment domain, and 6.7% for the movies domain." ] }
1812.08598
2905004272
Scheduling the maintenances of nuclear power plants is a complex optimization problem, formulated in 2-stage stochastic programming for the EURO ROADEF 2010 challenge. The first level optimizes the maintenance dates and refueling decisions. The second level optimizes the production to fulfill the power demands and to ensure feasibility and costs of the first stage decisions. This paper solves a deterministic version of the problem, studying Mixed Integer Programming (MIP) formulations and matheuristics. Relaxing only two sets of constraints of the ROADEF challenge, a MIP formulation can be written using only binary variables for the maintenance dates. The MIP formulations are used to design constructive matheuristics and a Variable Neighborhood Descent (VND) local search. These matheuristics produce very high quality solutions. Some intermediate results explains results of the Challenge: the relaxation of constraints CT6 are justified and neighborhood analyses with MIP-VND justifies the choice of neighborhoods to implement for the problem. Lastly, an extension with stability costs for monthly reoptimization is considered, with efficient bi-objective matheuristics.
An open question after the Challenge was to prove dual bounds for the large instances of the competition. (2013) furnished the first dual bounds @cite_2 using dual heuristics; the bounds proven in @cite_28 @cite_27 improved on these earlier bounds.
{ "cite_N": [ "@cite_28", "@cite_27", "@cite_2" ], "mid": [ "2567238128", "2807125841", "166079659" ], "abstract": [ "", "The EURO ROADEF 2010 Challenge aimed to schedule the maintenance and refueling operations of French nuclear power plants, ranking the approaches in competition for the quality of primal solutions. This paper justifies the high quality of the best solutions computing dual bounds with dual heuristics. A first step designs several Mixed Integer Programming (MIP) relaxations with different compromises between computation time and quality of dual bounds. To deal with smaller MIPs, we prove how reductions in the number of time steps and scenarios can guarantee dual bounds for the whole problem of the Challenge. Several sets of dual bounds are computable, improving significantly the former best dual bounds of the literature. Intermediate results allow also a better understanding of the problem and offer perspectives to improve some approaches of the Challenge.", "This paper addresses a large-scale power plant maintenance scheduling and production planning problem, which has been proposed by the ROADEF EURO Challenge 2010. We develop two lower bounds for the problem: a greedy heuristic and a flow network for which a minimum cost flow problem has to be solved. Furthermore, we present a solution approach that combines a constraint programming formulation of the problem with several heuristics. The problem is decomposed into an outage scheduling and a production planning phase. The first phase is solved by a constraint program, which additionally ensures the feasibility of the remaining problem. In the second phase we utilize a greedy heuristic--developed from our greedy lower bound--to assign production levels and refueling amounts for a given outage schedule. All proposed strategies are shown to be competitive in an experimental evaluation." ] }
1812.08598
2905004272
Scheduling the maintenances of nuclear power plants is a complex optimization problem, formulated in 2-stage stochastic programming for the EURO ROADEF 2010 challenge. The first level optimizes the maintenance dates and refueling decisions. The second level optimizes the production to fulfill the power demands and to ensure feasibility and costs of the first stage decisions. This paper solves a deterministic version of the problem, studying Mixed Integer Programming (MIP) formulations and matheuristics. Relaxing only two sets of constraints of the ROADEF challenge, a MIP formulation can be written using only binary variables for the maintenance dates. The MIP formulations are used to design constructive matheuristics and a Variable Neighborhood Descent (VND) local search. These matheuristics produce very high quality solutions. Some intermediate results explains results of the Challenge: the relaxation of constraints CT6 are justified and neighborhood analyses with MIP-VND justifies the choice of neighborhoods to implement for the problem. Lastly, an extension with stability costs for monthly reoptimization is considered, with efficient bi-objective matheuristics.
Two matheuristic approaches were designed for the ROADEF Challenge. A simplified MIP problem is solved, and the solution is then repaired to ensure feasibility for all the constraints and to compute the cost in the full model. In both cases, the MIP model is solved heuristically using truncated exact methods, the simplified MIP being too difficult to solve within one hour. We note that (2010) provided an exact and compact MIP formulation for the full problem, introducing binaries for every time step, cycle and production mode @math @cite_8 . A problem of this size is not tractable for a B&B search, even in truncated mode; this work was preliminary to a Column Generation (CG) approach similar to @cite_6 .
{ "cite_N": [ "@cite_6", "@cite_8" ], "mid": [ "2059695025", "1583833469" ], "abstract": [ "This paper presents a heuristic method based on column generation for the EDF (Electricite De France) long-term electricity production planning problem proposed as subject of the ROADEF EURO 2010 Challenge. This is to our knowledge the first-ranked method among those methods based on mathematical programming, and was ranked fourth overall. The problem consists in determining a production plan over the whole time horizon for each thermal power plant of the French electricity company, and for nuclear plants, a schedule of plant outages which are necessary for refueling and maintenance operations. The average cost of the overall outage and production planning, computed over a set of demand scenarios, is to be minimized. The method proceeds in two stages. In the first stage, dates for outages are fixed once for all for each nuclear plant. Data are aggregated with a single average scenario and reduced time steps, and a set-partitioning reformulation of this aggregated problem is solved for fixing outage dates with a heuristic based on column generation. The pricing problem associated with each nuclear plant is a shortest path problem in an appropriately constructed graph. In the second stage, the reload level is determined at each date of an outage, considering now all scenarios. Finally, the production quantities between two outages are optimized for each plant and each scenario by solving independent linear programming problems.", "Le probleme de placement sur deux dimensions consiste a decider s'il existe un rangement d'objets rectangulaires dans une boite donnee. C'est un probleme combinatoire difficile (a la complexite du respect des capacites s'ajoute celle du positionnement des objets). Nous considerons les variantes sans rotation des objets et avec ou sans optimisation de la valeur des objets places. Nous menons une etude exploratoire des methodologies qui peuvent etre developpees a l'interface de la programmation mathematique, de l'optimisation combinatoire et de la theorie des graphes. Nous comparons les formulations de la litterature et en proposons de nouvelles. Nous developpons et testons deux approches de resolution innovantes. L'une est basee sur la decomposition de Dantzig-Wolfe (avec un branchement sur les contraintes disjonctives de non recouvrement des objets). L'autre constitue en une approche combinatoire basee sur diverses caracterisations des graphes d'intervalles (modelisant le chevauchement des objets selon chaque axe)." ] }
1812.08598
2905004272
Scheduling the maintenances of nuclear power plants is a complex optimization problem, formulated in 2-stage stochastic programming for the EURO ROADEF 2010 challenge. The first level optimizes the maintenance dates and refueling decisions. The second level optimizes the production to fulfill the power demands and to ensure feasibility and costs of the first stage decisions. This paper solves a deterministic version of the problem, studying Mixed Integer Programming (MIP) formulations and matheuristics. Relaxing only two sets of constraints of the ROADEF challenge, a MIP formulation can be written using only binary variables for the maintenance dates. The MIP formulations are used to design constructive matheuristics and a Variable Neighborhood Descent (VND) local search. These matheuristics produce very high quality solutions. Some intermediate results explains results of the Challenge: the relaxation of constraints CT6 are justified and neighborhood analyses with MIP-VND justifies the choice of neighborhoods to implement for the problem. Lastly, an extension with stability costs for monthly reoptimization is considered, with efficient bi-objective matheuristics.
(2013) considered an exact formulation of the CT6 constraints in a CG approach dualizing the coupling constraints among units @cite_6 , i.e. the CT1 demands and the CT14 to CT21 scheduling constraints. The MIP considered a unique scenario, the average one, with production time steps aggregated weekly. The CG approach is deployed to compute an LP relaxation in which the scheduling constraints CT14-CT21 are relaxed, with subproblems solved by dynamic programming. The subsequent CG heuristic reincorporates these scheduling constraints in the integer resolution over the generated columns (a generic master/pricing sketch is given below). The solution of this MIP gives the outage dates and the weeks where the CT6 constraints are activated; the final cost and feasibility issues for the whole problem are obtained by computing the production with Linear Programming (LP). This approach was one of the most effective of the challenge, up to $2
{ "cite_N": [ "@cite_6" ], "mid": [ "2059695025" ], "abstract": [ "This paper presents a heuristic method based on column generation for the EDF (Electricite De France) long-term electricity production planning problem proposed as subject of the ROADEF EURO 2010 Challenge. This is to our knowledge the first-ranked method among those methods based on mathematical programming, and was ranked fourth overall. The problem consists in determining a production plan over the whole time horizon for each thermal power plant of the French electricity company, and for nuclear plants, a schedule of plant outages which are necessary for refueling and maintenance operations. The average cost of the overall outage and production planning, computed over a set of demand scenarios, is to be minimized. The method proceeds in two stages. In the first stage, dates for outages are fixed once for all for each nuclear plant. Data are aggregated with a single average scenario and reduced time steps, and a set-partitioning reformulation of this aggregated problem is solved for fixing outage dates with a heuristic based on column generation. The pricing problem associated with each nuclear plant is a shortest path problem in an appropriately constructed graph. In the second stage, the reload level is determined at each date of an outage, considering now all scenarios. Finally, the production quantities between two outages are optimized for each plant and each scenario by solving independent linear programming problems." ] }
1812.08598
2905004272
Scheduling the maintenances of nuclear power plants is a complex optimization problem, formulated in 2-stage stochastic programming for the EURO ROADEF 2010 challenge. The first level optimizes the maintenance dates and refueling decisions. The second level optimizes the production to fulfill the power demands and to ensure feasibility and costs of the first stage decisions. This paper solves a deterministic version of the problem, studying Mixed Integer Programming (MIP) formulations and matheuristics. Relaxing only two sets of constraints of the ROADEF challenge, a MIP formulation can be written using only binary variables for the maintenance dates. The MIP formulations are used to design constructive matheuristics and a Variable Neighborhood Descent (VND) local search. These matheuristics produce very high quality solutions. Some intermediate results explains results of the Challenge: the relaxation of constraints CT6 are justified and neighborhood analyses with MIP-VND justifies the choice of neighborhoods to implement for the problem. Lastly, an extension with stability costs for monthly reoptimization is considered, with efficient bi-objective matheuristics.
(2013) fully relaxed the constraints CT6 and CT12, leading to a MIP formulation with binaries only for the outage decisions @cite_21 . The production time steps were aggregated to weeks for size reasons. The stochastic scenarios were not aggregated, leading to a 2-stage stochastic programming structure solved by Benders decomposition (sketched below). The master problem handles the dates of outages and the refueling quantities. Independent subproblems are defined for each stochastic scenario, with continuous variables for productions and fuel levels. The heuristic of @cite_21 first computes the LP relaxation exactly with the Benders decomposition algorithm. Then, a cut-and-branch approach repairs integrality, branching on binary variables without adding new Benders cuts. The resulting heuristic approach was efficient on the small qualification dataset; difficulties and inefficiencies occurred on the final instances of the competition.
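A generic Benders reformulation matching the 2-stage structure just described is sketched below; x stands for the binary outage dates and refueling quantities, and each scenario s yields an independent LP subproblem whose duals generate cuts. The notation is illustrative, not the exact model of @cite_21 .

```latex
% Master problem (first stage): outage/refueling decisions x, one value
% variable theta_s per scenario, iteratively constrained by Benders cuts.
\begin{align*}
\min_{x,\theta} \;& c^\top x + \sum_{s} p_s \, \theta_s \\
\text{s.t. } & x \in X
  \quad \text{(scheduling constraints CT7--CT11, CT13--CT21)}\\
& \theta_s \ge \pi^\top (h_s - T_s x)
  \quad \text{for each generated optimality cut } \pi
\end{align*}
% Subproblem for scenario s (second stage): continuous production/fuel LP
%   Q_s(x) = min_y { q_s^T y : W y = h_s - T_s x,  y >= 0 },
% whose optimal duals pi provide the cuts added to the master.
```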
{ "cite_N": [ "@cite_21" ], "mid": [ "2055066666" ], "abstract": [ "This paper describes a Benders decomposition-based framework for solving the large scale energy management problem that was posed for the ROADEF 2010 challenge. The problem was taken from the power industry and entailed scheduling the outage dates for a set of nuclear power plants, which need to be regularly taken down for refueling and maintenance, in such a way that the expected cost of meeting the power demand in a number of potential scenarios is minimized. We show that the problem structure naturally lends itself to Benders decomposition; however, not all constraints can be included in the mixed integer programming model. We present a two phase approach that first uses Benders decomposition to solve the linear programming relaxation of a relaxed version of the problem. In the second phase, integer solutions are enumerated and a procedure is applied to make them satisfy constraints not included in the relaxed problem. To cope with the size of the formulations arising in our approach we describe efficient preprocessing techniques to reduce the problem size and show how aggregation can be applied to each of the subproblems. Computational results on the test instances show that the procedure competes well on small instances of the problem, but runs into difficulty on larger ones. Unlike heuristic approaches, however, this methodology can be used to provide lower bounds on solution quality." ] }
1812.08598
2905004272
Scheduling the maintenances of nuclear power plants is a complex optimization problem, formulated in 2-stage stochastic programming for the EURO ROADEF 2010 challenge. The first level optimizes the maintenance dates and refueling decisions. The second level optimizes the production to fulfill the power demands and to ensure feasibility and costs of the first stage decisions. This paper solves a deterministic version of the problem, studying Mixed Integer Programming (MIP) formulations and matheuristics. Relaxing only two sets of constraints of the ROADEF challenge, a MIP formulation can be written using only binary variables for the maintenance dates. The MIP formulations are used to design constructive matheuristics and a Variable Neighborhood Descent (VND) local search. These matheuristics produce very high quality solutions. Some intermediate results explains results of the Challenge: the relaxation of constraints CT6 are justified and neighborhood analyses with MIP-VND justifies the choice of neighborhoods to implement for the problem. Lastly, an extension with stability costs for monthly reoptimization is considered, with efficient bi-objective matheuristics.
A natural idea for solving the problem by decomposition is to follow the 2-stage structure of stochastic programming, distinguishing the high-level maintenance and refueling problem of the T2 units from the lower-level production problems, as in @cite_18 @cite_19 @cite_2 @cite_23 @cite_25 @cite_3 (schematic formulation below). The high-level problem fixes the maintenance dates and the refueling levels by slightly modifying the current solution while respecting the scheduling constraints CT7-CT11 and CT13-CT21. The maintenance planning of the last iteration (or the initial planning) is slightly modified within a defined neighborhood, where the feasibility of constraints CT13-CT21 can be ensured using MIP or Constraint Programming models. The low-level subproblems compute, independently for each scenario, the production plans that optimize the production costs and fulfill the constraints CT1-CT6 and CT12, with fuel levels and maintenance dates fixed. Such production problems can be solved using greedy strategies following increasing production costs, or using LP.
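Schematically, the outer level of this decomposition optimizes the expected second-stage production cost as a function of the first-stage maintenance decisions; the notation below is illustrative, not any of the cited models.

```latex
% Outer level: maintenance dates / refueling x, feasible for CT7--CT11 and
% CT13--CT21; inner level: per-scenario production problems Q_s(x) under
% CT1--CT6 and CT12 with x fixed (solved greedily or by LP).
\begin{equation*}
\min_{x \in X} \; c(x) + \frac{1}{|S|} \sum_{s \in S} Q_s(x),
\qquad
Q_s(x) = \min_{y_s \in Y_s(x)} \; q_s^\top y_s
\end{equation*}
```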
{ "cite_N": [ "@cite_18", "@cite_3", "@cite_19", "@cite_23", "@cite_2", "@cite_25" ], "mid": [ "2178167101", "2036434250", "2501937168", "1963524746", "166079659", "" ], "abstract": [ "The demand for electrical energy is globally growing very quickly. For this reason, the optimization of power plant productions and power plant maintenance scheduling have become important research topics. A Large Scale Energy Management (LSEM) problem is studied in this paper. Two types of power plants are considered: power plants of type 1 can be refueled while still operating. Power plants of type 2 need to be shut down from time to time, for refueling and ordinary maintenance (these are typically nuclear plants). Considering these two types of power plants, LSEM is the problem of optimizing production plans and scheduling of maintenances of type 2 plants, with the objective of keeping the production cost as low as possible, while fulfilling the customers demand. Uncertainty about the customers demand is taken into account in the model considered. In this article, a matheuristic optimization approach based on problem decomposition is proposed. The approach involves mixed integer linear programming and simulated annealing optimization methods. Computational results on some realistic instances are presented.", "We address the problem of planning outages of nuclear power plants submitted by EDF (Electricite De France) as the challenge EURO ROADEF 2010. As our team won the first prize of the contest in the senior category, our approach may be of interest: it is conceptually simple, easy to program and computationally relatively fast. We present both our method and some ideas to improve it.", "", "This paper presents a heuristic approach combining constraint satisfaction, local search and a constructive optimization algorithm for a large-scale energy management and maintenance scheduling problem. The methodology shows how to successfully combine and orchestrate different types of algorithms and to produce competitive results. We also propose an efficient way to scale the method for huge instances. A large part of the presented work was done to compete in the ROADEF EURO Challenge 2010, organized jointly by the ROADEF, EURO and Electricite de France. The numerical results obtained on official competition instances testify about the quality of the approach. The method achieves 3 out of 15 possible best results.", "This paper addresses a large-scale power plant maintenance scheduling and production planning problem, which has been proposed by the ROADEF EURO Challenge 2010. We develop two lower bounds for the problem: a greedy heuristic and a flow network for which a minimum cost flow problem has to be solved. Furthermore, we present a solution approach that combines a constraint programming formulation of the problem with several heuristics. The problem is decomposed into an outage scheduling and a production planning phase. The first phase is solved by a constraint program, which additionally ensures the feasibility of the remaining problem. In the second phase we utilize a greedy heuristic--developed from our greedy lower bound--to assign production levels and refueling amounts for a given outage schedule. All proposed strategies are shown to be competitive in an experimental evaluation.", "" ] }
1812.08598
2905004272
Scheduling the maintenances of nuclear power plants is a complex optimization problem, formulated in 2-stage stochastic programming for the EURO ROADEF 2010 challenge. The first level optimizes the maintenance dates and refueling decisions. The second level optimizes the production to fulfill the power demands and to assess the feasibility and costs of the first-stage decisions. This paper solves a deterministic version of the problem, studying Mixed Integer Programming (MIP) formulations and matheuristics. Relaxing only two sets of constraints of the ROADEF challenge, a MIP formulation can be written using only binary variables for the maintenance dates. The MIP formulations are used to design constructive matheuristics and a Variable Neighborhood Descent (VND) local search. These matheuristics produce very high-quality solutions. Some intermediate results explain outcomes of the Challenge: the relaxation of constraint CT6 is justified, and neighborhood analyses with MIP-VND justify the choice of neighborhoods to implement for the problem. Lastly, an extension with stability costs for monthly reoptimization is considered, with efficient bi-objective matheuristics.
We note that the operational approach at the French Utility Company was already in this scope before the Challenge; we refer to @cite_14 . The approaches in the competition were not among the most efficient, which can be analyzed by comparing them with the solving characteristics of frontal local search approaches.
{ "cite_N": [ "@cite_14" ], "mid": [ "625673688" ], "abstract": [ "Les recherches presentees dans cette these portent sur la modelisation et la resolution de systemes de contraintes, en considerant aussi bien l'aspect theorique que l'aspect pratique. La partie theorique a comme objectif de proposer des methodes generiques qui exploitent des techniques de la Programmation Par Contraintes et de la Programmation Mathematique pour modeliser et resoudre des systemes de contraintes binaires. Nous avons propose une formulation lineaire agregee pour les CSP binaires et une methode de filtrage combinant la relaxation Lagrangienne et la consistance d'arc. Pour les CSP sur-contraints, nous avons introduit la notion d'inegalite binaire valide. Nous avons egalement montre comment exploiter cette notion pour ameliorer les bornes inferieures qui se basent sur la consistance d'arc et proposer de nouvelles bornes inferieures ainsi qu'une technique de pretraitement de WCSP. Dans la partie appliquee, nous avons traite le probleme de placement des arrets et de la production des reacteurs nuclaires d'Electricite de Fance(EDF). Nous avons ameliore la modelisation mathematique actuelle de certaines contraintes du probleme et nous avons propose une nouvelle modelisation en Programmation Par Contraintes pour tout le probleme. Nous avons, par la suite, concu le solveur OSOPAN pour la satisfaction et l'optimisation de ce probleme. Ce solveur fait cooperer la Programmation Par Contraintes, la Programmation Mathematique ainsi que la Recherche Locale." ] }
1812.08683
2904933001
In this paper, we propose a robust method to estimate the average treatment effects in observational studies when the number of potential confounders is possibly much greater than the sample size. We first use a class of penalized M-estimators for the propensity score and outcome models. We then calibrate the initial estimate of the propensity score by balancing a carefully selected subset of covariates that are predictive of the outcome. Finally, the estimated propensity score is used to construct the inverse probability weighting estimator. We prove that the proposed estimator, which has the sample boundedness property, is root-n consistent, asymptotically normal, and semiparametrically efficient when the propensity score model is correctly specified and the outcome model is linear in covariates. More importantly, we show that our estimator remains root-n consistent and asymptotically normal so long as either the propensity score model or the outcome model is correctly specified. We provide valid confidence intervals in both cases and further extend these results to the case where the outcome model is a generalized linear model. In simulation studies, we find that the proposed methodology often estimates the average treatment effect more accurately than the existing methods. We also present an empirical application, in which we estimate the average causal effect of college attendance on adulthood political participation. Open-source software is available for implementing the proposed methodology.
In this subsection, we compare our method with the related work. First, we comment on the theoretical results of the AIPW estimator and double selection estimator when both the propensity score and outcome models are correctly specified. Second, we compare the results when one of the two models is misspecified. Finally, we consider the more recent work by @cite_29 , @cite_35 and @cite_33 @cite_14 .
{ "cite_N": [ "@cite_35", "@cite_29", "@cite_14", "@cite_33" ], "mid": [ "2261710003", "2565113373", "2786512041", "2765948077" ], "abstract": [ "In observational studies, propensity scores are commonly estimated by maxi- mum likelihood but may fail to balance high-dimensional pre-treatment covariates even after specification search. We introduce a general framework that unifies and generalizes several recent proposals to improve covariate balance when designing an observational study. In- stead of the likelihood function, we propose to optimize special loss functions---covariate balancing scoring rules (CBSR)---to estimate the propensity score. A CBSR is uniquely determined by the link function in the GLM and the estimand (a weighted average treatment effect). We show CBSR does not lose asymptotic efficiency to the Bernoulli likelihood in estimating the weighted average treatment effect compared, but CBSR is much more robust in finite sample. Borrowing tools developed in statistical learning, we propose practical strategies to balance covariate functions in rich function classes. This is useful to estimate the maximum bias of the inverse probability weighting (IPW) estimators and construct honest confidence interval in finite sample. Lastly, we provide several numerical examples to demonstrate the trade-off of bias and variance in the IPW-type estimators and the trade-off in balancing different function classes of the covariates.", "There are many settings where researchers are interested in estimating average treatment effects and are willing to rely on the unconfoundedness assumption, which requires that the treatment assignment be as good as random conditional on pre-treatment variables. The unconfoundedness assumption is often more plausible if a large number of pre-treatment variables are included in the analysis, but this can worsen the performance of standard approaches to treatment effect estimation. In this paper, we develop a method for de-biasing penalized regression adjustments to allow sparse regression methods like the lasso to be used for sqrt n -consistent inference of average treatment effects in high-dimensional linear models. Given linearity, we do not need to assume that the treatment propensities are estimable, or that the average treatment effect is a sparse contrast of the outcome model parameters. Rather, in addition standard assumptions used to make lasso regression on the outcome model consistent under 1-norm error, we only require overlap, i.e., that the propensity score be uniformly bounded away from 0 and 1. Procedurally, our method combines balancing weights with a regularized regression adjustment.", "Consider the problem of estimating average treatment effects when a large number of covariates are used to adjust for possible confounding through outcome regression and propensity score models. The conventional approach of model building and fitting iteratively can be difficult to implement, depending on ad hoc choices of what variables are included. In addition, uncertainty from the iterative process of model selection is complicated and often ignored in subsequent inference about treatment effects. 
We develop new methods and theory to obtain not only doubly robust point estimators for average treatment effects, which remain consistent if either the propensity score model or the outcome regression model is correctly specified, but also model-assisted confidence intervals, which are valid when the propensity score model is correctly specified but the outcome regression model may be misspecified. With a linear outcome model, the confidence intervals are doubly robust, that is, being also valid when the outcome model is correctly specified but the propensity score model may be misspecified. Our methods involve regularized calibrated estimators with Lasso penalties, but carefully chosen loss functions, for fitting propensity score and outcome regression models. We provide high-dimensional analysis to establish the desired properties of our methods under comparable conditions to previous results, which give valid confidence intervals when both the propensity score and outcome regression are correctly specified. We present a simulation study and an empirical application which confirm the advantages of the proposed methods compared with related methods based on regularized maximum likelihood estimation.", "Propensity score methods are widely used for estimating treatment effects from observational studies. A popular approach is to estimate propensity scores by maximum likelihood based on logistic regression, and then apply inverse probability weighted estimators or extensions to estimate treatment effects. However, a challenging issue is that such inverse probability weighting methods including doubly robust methods can perform poorly even when the logistic model appears adequate as examined by conventional techniques. In addition, there is increasing difficulty to appropriately estimate propensity scores when dealing with a large number of covariates. To address these issues, we study calibrated estimation as an alternative to maximum likelihood estimation for fitting logistic propensity score models. We show that, with possible model misspecification, minimizing the expected calibration loss underlying the calibrated estimators involves reducing both the expected likelihood loss and a measure of relative errors which controls the mean squared errors of inverse probability weighted estimators. Furthermore, we propose a regularized calibrated estimator by minimizing the calibration loss with a Lasso penalty. We develop a novel Fisher scoring descent algorithm for computing the proposed estimator, and provide a high-dimensional analysis of the resulting inverse probability weighted estimators of population means, leveraging the control of relative errors for calibrated estimation. We present a simulation study and an empirical application to demonstrate the advantages of the proposed methods compared with maximum likelihood and regularization." ] }
1812.08683
2904933001
In this paper, we propose a robust method to estimate the average treatment effects in observational studies when the number of potential confounders is possibly much greater than the sample size. We first use a class of penalized M-estimators for the propensity score and outcome models. We then calibrate the initial estimate of the propensity score by balancing a carefully selected subset of covariates that are predictive of the outcome. Finally, the estimated propensity score is used to construct the inverse probability weighting estimator. We prove that the proposed estimator, which has the sample boundedness property, is root-n consistent, asymptotically normal, and semiparametrically efficient when the propensity score model is correctly specified and the outcome model is linear in covariates. More importantly, we show that our estimator remains root-n consistent and asymptotically normal so long as either the propensity score model or the outcome model is correctly specified. We provide valid confidence intervals in both cases and further extend these results to the case where the outcome model is a generalized linear model. In simulation studies, we find that the proposed methodology often estimates the average treatment effect more accurately than the existing methods. We also present an empirical application, in which we estimate the average causal effect of college attendance on adulthood political participation. Open-source software is available for implementing the proposed methodology.
When both the propensity score and outcome models are correctly specified, prior work showed that the AIPW estimator is asymptotically normal and efficient in high dimensions. Their assumptions and main results are parallel to our Theorem . However, the sample boundedness property in Remark does not hold for the AIPW estimator in general. We note that the above work and our Theorem can be viewed as an extension of the semiparametric efficiency property of doubly robust estimators; see @cite_18 @cite_8 @cite_10 @cite_21 @cite_23 @cite_32 @cite_25 , among many others.
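For reference, the AIPW estimator discussed here has a standard closed form once the nuisance models are fitted. The sketch below (NumPy) assumes fitted outcome regressions m1_hat, m0_hat and fitted propensity scores e_hat are given as arrays; how they are fitted (e.g., with Lasso penalties) is the substance of the cited work and is not shown.

```python
import numpy as np

def aipw_ate(y, t, m1_hat, m0_hat, e_hat):
    """Standard AIPW (doubly robust) estimate of the average treatment effect.

    y: outcomes; t: binary treatment indicators (0/1);
    m1_hat, m0_hat: fitted E[Y | X, T=1] and E[Y | X, T=0];
    e_hat: fitted P(T=1 | X), assumed bounded away from 0 and 1 (overlap).
    """
    y, t = np.asarray(y, float), np.asarray(t, float)
    psi = (m1_hat - m0_hat
           + t * (y - m1_hat) / e_hat
           - (1 - t) * (y - m0_hat) / (1 - e_hat))
    ate = psi.mean()
    se = psi.std(ddof=1) / np.sqrt(len(psi))  # plug-in standard error
    return ate, se
```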
{ "cite_N": [ "@cite_18", "@cite_8", "@cite_21", "@cite_32", "@cite_23", "@cite_10", "@cite_25" ], "mid": [ "2039811614", "2137370054", "2120817734", "2086102241", "1999188374", "2058499415", "" ], "abstract": [ "Abstract In applied problems it is common to specify a model for the conditional mean of a response given a set of regressors. A subset of the regressors may be missing for some study subjects either by design or happenstance. In this article we propose a new class of semiparametric estimators, based on inverse probability weighted estimating equations, that are consistent for parameter vector α0 of the conditional mean model when the data are missing at random in the sense of Rubin and the missingness probabilities are either known or can be parametrically modeled. We show that the asymptotic variance of the optimal estimator in our class attains the semiparametric variance bound for the model by first showing that our estimation problem is a special case of the general problem of parameter estimation in an arbitrary semiparametric model in which the data are missing at random and the probability of observing complete data is bounded away from 0, and then deriving a representation for the efficient score...", "Summary The goal of this article is to construct doubly robust (DR) estimators in ignorable missing data and causal inference models. In a missing data model, an estimator is DR if it remains consistent when either (but not necessarily both) a model for the missingness mechanism or a model for the distribution of the complete data is correctly specified. Because with observational data one can never be sure that either a missingness model or a complete data model is correct, perhaps the best that can be hoped for is to find a DR estimator. DR estimators, in contrast to standard likelihood-based or (nonaugmented) inverse probability-weighted estimators, give the analyst two chances, instead of only one, to make a valid inference. In a causal inference model, an estimator is DR if it remains consistent when either a model for the treatment assignment mechanism or a model for the distribution of the counterfactual data is correctly specified. Because with observational data one can never be sure that a model for the treatment assignment mechanism or a model for the counterfactual data is correct, inference based on DR estimators should improve upon previous approaches. Indeed, we present the results of simulation studies which demonstrate that the finite sample performance of DR estimators is as impressive as theory would predict. The proposed method is applied to a cardiovascular clinical trial.", "Weighting methods that adjust for observed covariates, such as inverse probability weighting, are widely used for causal inference and estimation with incomplete outcome data. Part of the appeal of such methods is that one set of weights can be used to estimate a range of treatment effects based on different outcomes, or a variety of population means for several variables. However, this appeal can be diminished in practice by the instability of the estimated weights and by the difficulty of adequately adjusting for observed covariates in some settings. To address these limitations, this article presents a new weighting method that finds the weights of minimum variance that adjust or balance the empirical distribution of the observed covariates up to levels prespecified by the researcher. 
This method allows the researcher to balance very precisely the means of the observed covariates and other features of their marginal and joint distributions, such as variances and correlations and also, for example, the ...", "This paper proposes entropy balancing, a data preprocessing method to achieve covariate balance in observational studies with binary treatments. Entropy balancing relies on a maximum entropy reweighting scheme that calibrates unit weights so that the reweighted treatment and control group satisfy a potentially large set of prespecified balance conditions that incorporate information about known sample moments. Entropy balancing thereby exactly adjusts inequalities in representation with respect to the first, second, and possibly higher moments of the covariate distributions. These balance improvements can reduce model dependence for the subsequent estimation of treatment effects. The method assures that balance improves on all covariate moments included in the reweighting. It also obviates the need for continual balance checking and iterative searching over propensity score models that may stochastically balance the covariate moments. We demonstrate the use of entropy balancing with Monte Carlo simulations and empirical applications.", "Consider estimating the mean of an outcome in the presence of missing data or estimating population average treatment effects in causal inference. A doubly robust estimator remains consistent if an outcome regression model or a propensity score model is correctly specified. We build on a previous nonparametric likelihood approach and propose new doubly robust estimators, which have desirable properties in efficiency if the propensity score model is correctly specified, and in boundedness even if the inverse probability weights are highly variable. We compare the new and existing estimators in a simulation study and find that the robustified likelihood estimators yield overall the smallest mean squared errors. Copyright 2010, Oxford University Press.", "Comment on Performance of Double-Robust Estimators When Inverse Probability'' Weights Are Highly Variable'' [arXiv:0804.2958]", "" ] }
1812.08683
2904933001
In this paper, we propose a robust method to estimate the average treatment effects in observational studies when the number of potential confounders is possibly much greater than the sample size. We first use a class of penalized M-estimators for the propensity score and outcome models. We then calibrate the initial estimate of the propensity score by balancing a carefully selected subset of covariates that are predictive of the outcome. Finally, the estimated propensity score is used to construct the inverse probability weighting estimator. We prove that the proposed estimator, which has the sample boundedness property, is root-n consistent, asymptotically normal, and semiparametrically efficient when the propensity score model is correctly specified and the outcome model is linear in covariates. More importantly, we show that our estimator remains root-n consistent and asymptotically normal so long as either the propensity score model or the outcome model is correctly specified. We provide valid confidence intervals in both cases and further extend these results to the case where the outcome model is a generalized linear model. In simulation studies, we find that the proposed methodology often estimates the average treatment effect more accurately than the existing methods. We also present an empirical application, in which we estimate the average causal effect of college attendance on adulthood political participation. Open-source software is available for implementing the proposed methodology.
When either the propensity score model or the outcome model is misspecified, Propositions and provide a complete characterization of the asymptotic behavior of our estimator. In the same context, @cite_3 proved that the AIPW estimator is consistent, but Theorem 2 of that work does not yield an explicit convergence rate. In fact, we show in the supplementary material that the AIPW estimator has the same convergence rate as in , which is slower than @math , and thus confidence intervals for the treatment effect are not available under model misspecification. In contrast, our estimator is root- @math consistent, which leads to honest confidence intervals as shown in Sections and . Indeed, this robustness of the asymptotic distribution to model misspecification is the main advantage over the AIPW estimators and the double selection estimator .
{ "cite_N": [ "@cite_3" ], "mid": [ "2100532505" ], "abstract": [ "This paper concerns robust inference on average treatment effects following model selection. Under selection on observables, we construct confidence intervals using a doubly-robust estimator that are robust to model selection errors and prove their uniform validity over a large class of models that allows for multivalued treatments with heterogeneous effects and selection amongst (possibly) more covariates than observations. The semiparametric efficiency bound is attained under appropriate conditions. Precise conditions are given for any model selector to yield these results, and we specifically propose the group lasso, which is apt for treatment effects, and derive new results for high-dimensional, sparse multinomial logistic regression. Both a simulation study and revisiting the National Supported Work demonstration show our estimator performs well in finite samples." ] }
1812.08683
2904933001
In this paper, we propose a robust method to estimate the average treatment effects in observational studies when the number of potential confounders is possibly much greater than the sample size. We first use a class of penalized M-estimators for the propensity score and outcome models. We then calibrate the initial estimate of the propensity score by balancing a carefully selected subset of covariates that are predictive of the outcome. Finally, the estimated propensity score is used to construct the inverse probability weighting estimator. We prove that the proposed estimator, which has the sample boundedness property, is root-n consistent, asymptotically normal, and semiparametrically efficient when the propensity score model is correctly specified and the outcome model is linear in covariates. More importantly, we show that our estimator remains root-n consistent and asymptotically normal so long as either the propensity score model or the outcome model is correctly specified. We provide valid confidence intervals in both cases and further extend these results to the case where the outcome model is a generalized linear model. In simulation studies, we find that the proposed methodology often estimates the average treatment effect more accurately than the existing methods. We also present an empirical application, in which we estimate the average causal effect of college attendance on adulthood political participation. Open-source software is available for implementing the proposed methodology.
In another recent work, @cite_35 proposed a generalized covariate balancing method based on a class of scoring rules. Many existing covariate balancing estimators can be treated as the primal or dual problems of their optimization problem. @cite_35 studied the robustness of these estimators to misspecified propensity score models under the constant treatment effect model @math for some constant @math . In contrast, our methodology allows for the heterogeneity of causal effects. In addition, while our work mainly focuses on the high-dimensional settings, @cite_35 does not provide statistical guarantees in such settings.
{ "cite_N": [ "@cite_35" ], "mid": [ "2261710003" ], "abstract": [ "In observational studies, propensity scores are commonly estimated by maxi- mum likelihood but may fail to balance high-dimensional pre-treatment covariates even after specification search. We introduce a general framework that unifies and generalizes several recent proposals to improve covariate balance when designing an observational study. In- stead of the likelihood function, we propose to optimize special loss functions---covariate balancing scoring rules (CBSR)---to estimate the propensity score. A CBSR is uniquely determined by the link function in the GLM and the estimand (a weighted average treatment effect). We show CBSR does not lose asymptotic efficiency to the Bernoulli likelihood in estimating the weighted average treatment effect compared, but CBSR is much more robust in finite sample. Borrowing tools developed in statistical learning, we propose practical strategies to balance covariate functions in rich function classes. This is useful to estimate the maximum bias of the inverse probability weighting (IPW) estimators and construct honest confidence interval in finite sample. Lastly, we provide several numerical examples to demonstrate the trade-off of bias and variance in the IPW-type estimators and the trade-off in balancing different function classes of the covariates." ] }
1812.08683
2904933001
In this paper, we propose a robust method to estimate the average treatment effects in observational studies when the number of potential confounders is possibly much greater than the sample size. We first use a class of penalized M-estimators for the propensity score and outcome models. We then calibrate the initial estimate of the propensity score by balancing a carefully selected subset of covariates that are predictive of the outcome. Finally, the estimated propensity score is used to construct the inverse probability weighting estimator. We prove that the proposed estimator, which has the sample boundedness property, is root-n consistent, asymptotically normal, and semiparametrically efficient when the propensity score model is correctly specified and the outcome model is linear in covariates. More importantly, we show that our estimator remains root-n consistent and asymptotically normal so long as either the propensity score model or the outcome model is correctly specified. We provide valid confidence intervals in both cases and further extend these results to the case where the outcome model is a generalized linear model. In simulation studies, we find that the proposed methodology often estimates the average treatment effect more accurately than the existing methods. We also present an empirical application, in which we estimate the average causal effect of college attendance on adulthood political participation. Open-source software is available for implementing the proposed methodology.
Most recently, @cite_33 @cite_14 proposed a penalized calibrated propensity score method and studied its robustness to model misspecification. Our work is closely related to @cite_33 , which can be seen as equivalent to directly plugging the initial estimator @math into the Horvitz-Thompson estimator with @math . However, this method does not balance the covariates as we do in Step 3. Corollary 3 of @cite_33 implies that the estimator has the convergence rate @math , which is slower than that of our estimator. In our proof, one can treat @math as the "bias" of the Horvitz-Thompson estimator, which is eliminated by the covariate balancing step, whereas this term remains in @cite_33 . In the follow-up paper, @cite_14 removed this bias by constructing an AIPW estimator so that the resulting estimator is robust to model misspecification. However, our result is more general than that of @cite_14 . First, our Theorem and Propositions and show that there exists a large class of estimators that is asymptotically normal under possible model misspecification. Second, our theory holds for generalized linear models as shown in Theorem , whereas @cite_14 's method is invalid if the propensity score model is misspecified.
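For concreteness, the Horvitz-Thompson (inverse probability weighting) plug-in step referred to above takes the following simple form. The methods compared here differ in how the propensity scores e_hat are produced (penalized likelihood, calibration, covariate balancing), which this sketch deliberately leaves out.

```python
import numpy as np

def horvitz_thompson_ate(y, t, e_hat):
    """Plain IPW / Horvitz-Thompson estimate of the average treatment effect,
    given outcomes y, binary treatments t and fitted propensity scores e_hat."""
    y, t = np.asarray(y, float), np.asarray(t, float)
    return np.mean(t * y / e_hat) - np.mean((1 - t) * y / (1 - e_hat))
```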
{ "cite_N": [ "@cite_14", "@cite_33" ], "mid": [ "2786512041", "2765948077" ], "abstract": [ "Consider the problem of estimating average treatment effects when a large number of covariates are used to adjust for possible confounding through outcome regression and propensity score models. The conventional approach of model building and fitting iteratively can be difficult to implement, depending on ad hoc choices of what variables are included. In addition, uncertainty from the iterative process of model selection is complicated and often ignored in subsequent inference about treatment effects. We develop new methods and theory to obtain not only doubly robust point estimators for average treatment effects, which remain consistent if either the propensity score model or the outcome regression model is correctly specified, but also model-assisted confidence intervals, which are valid when the propensity score model is correctly specified but the outcome regression model may be misspecified. With a linear outcome model, the confidence intervals are doubly robust, that is, being also valid when the outcome model is correctly specified but the propensity score model may be misspecified. Our methods involve regularized calibrated estimators with Lasso penalties, but carefully chosen loss functions, for fitting propensity score and outcome regression models. We provide high-dimensional analysis to establish the desired properties of our methods under comparable conditions to previous results, which give valid confidence intervals when both the propensity score and outcome regression are correctly specified. We present a simulation study and an empirical application which confirm the advantages of the proposed methods compared with related methods based on regularized maximum likelihood estimation.", "Propensity score methods are widely used for estimating treatment effects from observational studies. A popular approach is to estimate propensity scores by maximum likelihood based on logistic regression, and then apply inverse probability weighted estimators or extensions to estimate treatment effects. However, a challenging issue is that such inverse probability weighting methods including doubly robust methods can perform poorly even when the logistic model appears adequate as examined by conventional techniques. In addition, there is increasing difficulty to appropriately estimate propensity scores when dealing with a large number of covariates. To address these issues, we study calibrated estimation as an alternative to maximum likelihood estimation for fitting logistic propensity score models. We show that, with possible model misspecification, minimizing the expected calibration loss underlying the calibrated estimators involves reducing both the expected likelihood loss and a measure of relative errors which controls the mean squared errors of inverse probability weighted estimators. Furthermore, we propose a regularized calibrated estimator by minimizing the calibration loss with a Lasso penalty. We develop a novel Fisher scoring descent algorithm for computing the proposed estimator, and provide a high-dimensional analysis of the resulting inverse probability weighted estimators of population means, leveraging the control of relative errors for calibrated estimation. We present a simulation study and an empirical application to demonstrate the advantages of the proposed methods compared with maximum likelihood and regularization." ] }
1907.02480
2955607373
The ever-increasing mobile connectivity affects every aspect of our daily lives, including how and when we keep ourselves informed and consult news media. By studying mobile web data provided by one of the major Chilean telecommunication companies, we investigate how different cohorts of the population of Santiago de Chile consume news media content through their smartphones. We address the issue of inequalities in access to information, trying to understand to what extent socio-demographic factors impact the preferences and habits of users.
Chilean news media have been studied to outline, for example, their ownership structures or their political bias, and to understand the effects of certain press manipulations by owners on the content shown @cite_15 . These results are based on hypotheses born out of an operationalization of Herman and Chomsky's Propaganda Model @cite_4 , whose so-called "filters" provide an assessment of how the media behave. One particular hypothesis is that the media manipulate content to target news that appeals to a certain audience. This has also been explored using Twitter data @cite_7 . What we study here is a specialization of that work; namely, how people of different socio-economic backgrounds access news media information using their mobile phones, with insights into particular news outlets, at the finest possible level of granularity.
{ "cite_N": [ "@cite_15", "@cite_4", "@cite_7" ], "mid": [ "2766427402", "", "2793539342" ], "abstract": [ "CONICYT 63130228 Movistar - Telefonica Chile Chilean government initiative CORFO 13CEE2-21592 (2013-21592-1-INNOVA PRODUCCION2013-21592-1) Conicyt's Proyecto de Informacion Cientifica, Pluralismo en el Sistema Informativo Nacional PLU140001 Millenium Institute for Foundational Research on Data", "", "News consumers expect news outlets to be objective and balanced in their reports of events and opinions. However, there is a growing body of evidence of bias in the media caused by underlying political and socio-economic viewpoints. Previous studies have tried to classify the partiality of the media, but there is little work on quantifying it, and less still on the nature of this partiality. The vast amount of content published in social media enables us to quantify the inclination of the press to pre-defined sides of the socio-political spectrum. To describe such tendencies, we use tweets to automatically compute a news outlet’s political and socio-economic orientation. Results show that the media have a measurable bias, and illustrate this by showing the favoritism of Chilean media for the ruling political parties in the country. This favoritism becomes clearer as we empirically observe a shift in the position of the mass media when there is a change in government. Even though relative differences in bias between news outlets can be observed, public awareness of the bias of the media landscape as a whole appears to be limited by the political space defined by the news that we receive as a population. We found that the nature of the bias is reflected in the vocabulary used and the entities mentioned by different news outlets. A survey conducted among news consumers confirms that media bias has an impact on the coverage of controversial topics and that this is perceivable by the general audience. Having a more accurate method to measure and characterize media bias will help readers position outlets in the socio-economic landscape, even when a (sometimes opposite) self-declared position is stated. This will empower readers to better reflect on the content provided by their news outlets of choice." ] }
1907.02511
2956103957
In linear inverse problems, the goal is to recover a target signal from undersampled, incomplete or noisy linear measurements. Typically, the recovery relies on complex numerical optimization methods; recent approaches perform an unfolding of a numerical algorithm into a neural network form, resulting in a substantial reduction of the computational complexity. In this paper, we consider the recovery of a target signal with the aid of a correlated signal, the so-called side information (SI), and propose a deep unfolding model that incorporates SI. The proposed model is used to learn coupled representations of correlated signals from different modalities, enabling the recovery of multimodal data at a low computational cost. As such, our work introduces the first deep unfolding method with SI, which actually comes from a different modality. We apply our model to reconstruct near-infrared images from undersampled measurements given RGB images as SI. Experimental results demonstrate the superior performance of the proposed framework against single-modal deep learning methods that do not use SI, multimodal deep learning designs, and optimization algorithms.
A common approach for solving problems of the form with sparsity constraints is convex optimization @cite_17 . Let us assume that the unknown @math has a sparse representation @math with respect to a dictionary @math , @math , that is, @math . Then, takes the form and a solution can be obtained by formulating the @math minimization problem: where @math denotes the @math -norm ( @math ), which promotes sparse solutions, and @math is a regularization parameter.
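The formulation itself is elided in this excerpt; a standard instance of such an l1-regularized (Lasso / basis-pursuit-denoising type) problem, with symbols chosen here purely for illustration, reads:

```latex
% Illustrative symbols: y = measurements, A = measurement operator,
% D = dictionary, u = sparse code, lambda > 0 = regularization parameter.
\min_{u}\; \tfrac{1}{2}\,\lVert y - A D u \rVert_2^2 \;+\; \lambda \lVert u \rVert_1
```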
{ "cite_N": [ "@cite_17" ], "mid": [ "2078204800" ], "abstract": [ "The time-frequency and time-scale communities have recently developed a large number of overcomplete waveform dictionaries---stationary wavelets, wavelet packets, cosine packets, chirplets, and warplets, to name a few. Decomposition into overcomplete systems is not unique, and several methods for decomposition have been proposed, including the method of frames (MOF), matching pursuit (MP), and, for special dictionaries, the best orthogonal basis (BOB). Basis pursuit (BP) is a principle for decomposing a signal into an \"optimal\"' superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions. We give examples exhibiting several advantages over MOF, MP, and BOB, including better sparsity and superresolution. BP has interesting relations to ideas in areas as diverse as ill-posed problems, abstract harmonic analysis, total variation denoising, and multiscale edge denoising. BP in highly overcomplete dictionaries leads to large-scale optimization problems. With signals of length 8192 and a wavelet packet dictionary, one gets an equivalent linear program of size 8192 by 212,992. Such problems can be attacked successfully only because of recent advances in linear and quadratic programming by interior-point methods. We obtain reasonable success with a primal-dual logarithmic barrier method and conjugate-gradient solver." ] }
1907.02511
2956103957
In linear inverse problems, the goal is to recover a target signal from undersampled, incomplete or noisy linear measurements. Typically, the recovery relies on complex numerical optimization methods; recent approaches perform an unfolding of a numerical algorithm into a neural network form, resulting in a substantial reduction of the computational complexity. In this paper, we consider the recovery of a target signal with the aid of a correlated signal, the so-called side information (SI), and propose a deep unfolding model that incorporates SI. The proposed model is used to learn coupled representations of correlated signals from different modalities, enabling the recovery of multimodal data at a low computational cost. As such, our work introduces the first deep unfolding method with SI, which actually comes from a different modality. We apply our model to reconstruct near-infrared images from undersampled measurements given RGB images as SI. Experimental results demonstrate the superior performance of the proposed framework against single-modal deep learning methods that do not use SI, multimodal deep learning designs, and optimization algorithms.
Numerical methods @cite_10 proposed to solve include pivoting algorithms, interior-point methods, gradient-based methods and approximate message passing (AMP) algorithms @cite_4 . Among gradient-based methods, proximal methods are tailored to optimize an objective of the form where @math is a convex differentiable function with a Lipschitz-continuous gradient, and @math is convex and possibly nonsmooth @cite_0 , @cite_28 . Their main step involves the proximal operator, defined for a function @math according to with @math and @math an upper bound on the Lipschitz constant of @math . A popular proximal algorithm is the Iterative Soft Thresholding Algorithm (ISTA) @cite_31 @cite_13 . Let us set @math , @math in . At the @math -th iteration, ISTA computes: where @math denotes the proximal operator [Figure (a)], expressed by the component-wise shrinkage function: with @math .
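A minimal NumPy sketch of ISTA as just described, assuming for brevity that the dictionary is the identity (so the sparse code is recovered directly) and taking the step size as the inverse of the squared spectral norm of the measurement matrix:

```python
import numpy as np

def soft_threshold(v, tau):
    """Component-wise shrinkage: the proximal operator of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, y, lam, n_iter=200):
    """ISTA for min_x 0.5 * ||y - A x||_2^2 + lam * ||x||_1 (identity dictionary)."""
    L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the data-term gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)    # gradient step on the smooth part
        x = soft_threshold(x - grad / L, lam / L)
    return x
```

For instance, with A a random 50x200 Gaussian matrix and y = A @ x_true for a sufficiently sparse x_true, a few hundred iterations with a suitably tuned lam typically recover the support of x_true.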
{ "cite_N": [ "@cite_13", "@cite_4", "@cite_28", "@cite_0", "@cite_31", "@cite_10" ], "mid": [ "", "2166670884", "2093545205", "1946620893", "2115706991", "2118297240" ], "abstract": [ "", "We consider the estimation of a random vector observed through a linear transform followed by a componentwise probabilistic measurement channel. Although such linear mixing estimation problems are generally highly non-convex, Gaussian approximations of belief propagation (BP) have proven to be computationally attractive and highly effective in a range of applications. Recently, Bayati and Montanari have provided a rigorous and extremely general analysis of a large class of approximate message passing (AMP) algorithms that includes many Gaussian approximate BP methods. This paper extends their analysis to a larger class of algorithms to include what we call generalized AMP (G-AMP). G-AMP incorporates general (possibly non-AWGN) measurement channels. Similar to the AWGN output channel case, we show that the asymptotic behavior of the G-AMP algorithm under large i.i.d. Gaussian transform matrices is described by a simple set of state evolution (SE) equations. The general SE equations recover and extend several earlier results, including SE equations for approximate BP on general output channels by Guo and Wang.", "Sparse estimation methods are aimed at using or obtaining parsimonious representations of data or models. They were first dedicated to linear variable selection but numerous extensions have now emerged such as structured sparsity or kernel selection. It turns out that many of the related estimation problems can be cast as convex optimization problems by regularizing the empirical risk with appropriate nonsmooth norms. The goal of this monograph is to present from a general perspective optimization tools and techniques dedicated to such sparsity-inducing penalties. We cover proximal methods, block-coordinate descent, reweighted l2-penalized techniques, working-set and homotopy methods, as well as non-convex formulations and extensions, and provide an extensive set of experiments to compare various algorithms from a computational point of view.", "The proximity operator of a convex function is a natural extension of the notion of a projection operator onto a convex set. This tool, which plays a central role in the analysis and the numerical solution of convex optimization problems, has recently been introduced in the arena of inverse problems and, especially, in signal processing, where it has become increasingly important. In this paper, we review the basic properties of proximity operators which are relevant to signal processing and present optimization methods based on these operators. These proximal splitting methods are shown to capture and extend several well-known algorithms in a unifying framework. Applications of proximal methods in signal recovery and synthesis are discussed.", "We consider linear inverse problems where the solution is assumed to have a sparse expansion on an arbitrary preassigned orthonormal basis. We prove that replacing the usual quadratic regularizing penalties by weighted p-penalties on the coefficients of such expansions, with 1 ≤ p ≤ 2, still regularizes the problem. Use of such p-penalized problems with p < 2 is often advocated when one expects the underlying ideal noiseless solution to have a sparse expansion with respect to the basis under consideration. 
To compute the corresponding regularized solutions, we analyze an iterative algorithm that amounts to a Landweber iteration with thresholding (or nonlinear shrinkage) applied at each iteration step. We prove that this algorithm converges in norm. © 2004 Wiley Periodicals, Inc.", "The goal of the sparse approximation problem is to approximate a target signal using a linear combination of a few elementary signals drawn from a fixed collection. This paper surveys the major practical algorithms for sparse approximation. Specific attention is paid to computational issues, to the circumstances in which individual methods tend to perform well, and to the theoretical guarantees available. Many fundamental questions in electrical engineering, statistics, and applied mathematics can be posed as sparse approximation problems, making these algorithms versatile and relevant to a plethora of applications." ] }
1907.02584
2953945697
We propose a fast, model agnostic method for finding interpretable counterfactual explanations of classifier predictions by using class prototypes. We show that class prototypes, obtained using either an encoder or through class specific k-d trees, significantly speed up the search for counterfactual instances and result in more interpretable explanations. We introduce two novel metrics to quantitatively evaluate local interpretability at the instance level. We use these metrics to illustrate the effectiveness of our method on an image and tabular dataset, respectively MNIST and Breast Cancer Wisconsin (Diagnostic). The method also eliminates the computational bottleneck that arises because of numerical gradient evaluation for @math models.
The problem of local, instance level model explanations for classification can be approached from various angles. Feature attribution methods assign importance to each input feature for a given prediction. Attribution methods can be fully model agnostic @cite_18 @cite_31 or require knowledge of the architecture of the underlying model @cite_10 @cite_3 @cite_2 . Alternatively, we can also assess the impact of individual training data instances on a specific prediction by using influence functions @cite_12 @cite_14 @cite_7 .
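As an illustration of the model-agnostic attribution family, the following is a minimal LIME-flavored sketch (not the LIME library's API): sample perturbations around the instance, weight them by proximity, and read attributions off a weighted linear surrogate. The function name and parameters are choices made here for illustration only.

```python
import numpy as np

def local_linear_attributions(predict, x, n_samples=500, sigma=0.5, seed=0):
    """Fit a proximity-weighted linear surrogate to `predict` around x;
    the surrogate's coefficients serve as per-feature attributions."""
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(scale=sigma, size=(n_samples, x.size))   # perturbations
    w = np.exp(-((Z - x) ** 2).sum(axis=1) / (2 * sigma ** 2))  # proximity kernel
    Zb = np.hstack([np.ones((n_samples, 1)), Z])                # add intercept
    sw = np.sqrt(w)[:, None]
    # Weighted least squares via a rescaled ordinary least-squares problem.
    beta, *_ = np.linalg.lstsq(sw * Zb, sw[:, 0] * predict(Z), rcond=None)
    return beta[1:]  # drop the intercept; one attribution per input feature
```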
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_7", "@cite_3", "@cite_2", "@cite_31", "@cite_10", "@cite_12" ], "mid": [ "2282821441", "2947285452", "2964330603", "2195388612", "2773497437", "2962862931", "1787224781", "2597603852" ], "abstract": [ "Despite widespread adoption, machine learning models remain mostly black boxes. Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model. Such understanding also provides insights into the model, which can be used to transform an untrustworthy model or prediction into a trustworthy one. In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally varound the prediction. We also propose a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem. We demonstrate the flexibility of these methods by explaining different models for text (e.g. random forests) and image classification (e.g. neural networks). We show the utility of explanations via novel experiments, both simulated and with human subjects, on various scenarios that require trust: deciding if one should trust a prediction, choosing between models, improving an untrustworthy classifier, and identifying why a classifier should not be trusted.", "Influence functions estimate the effect of removing particular training points on a model without needing to retrain it. They are based on a first-order approximation that is accurate for small changes in the model, and so are commonly used for studying the effect of individual points in large datasets. However, we often want to study the effects of large groups of training points, e.g., to diagnose batch effect or apportion credit between different data sources. Removing such large groups can result in significant changes to the model. Are influence functions still accurate in this setting? In this paper, we find that across many different types of groups and in a range of real-world datasets, the influence of a group correlates surprisingly well with its actual effect, even if the absolute and relative error can be large. Our theoretical analysis shows that such correlation arises under certain settings but need not hold in general, indicating that real-world datasets have particular properties that keep the influence approximation well-behaved.", "Research in both machine learning and psychology suggests that salient examples can help humans to interpret learning models. To this end, we take a novel look at black box interpretation of test predictions in terms of training examples. Our goal is to ask which training examples are most responsible for a given set of predictions''? To answer this question, we make use of Fisher kernels as the defining feature embedding of each data point, combined with Sequential Bayesian Quadrature (SBQ) for efficient selection of examples. In contrast to prior work, our method is able to seamlessly handle any sized subset of test predictions in a principled way. We theoretically analyze our approach, providing novel convergence bounds for SBQ over discrete candidate atoms. Our approach recovers the application of influence functions for interpretability as a special case yielding novel insights from this connection. 
We also present applications of the proposed approach to three use cases: cleaning training data, fixing mislabeled examples and data summarization.", "Nonlinear methods such as Deep Neural Networks (DNNs) are the gold standard for various challenging machine learning problems such as image recognition. Although these methods perform impressively well, they have a significant disadvantage, the lack of transparency, limiting the interpretability of the solution and thus the scope of application in practice. Especially DNNs act as black boxes due to their multilayer nonlinear structure. In this paper we introduce a novel methodology for interpreting generic multilayer neural networks by decomposing the network classification decision into contributions of its input elements. Although our focus is on image classification, the method is applicable to a broad set of input data, learning tasks and network architectures. Our method called deep Taylor decomposition efficiently utilizes the structure of the network by backpropagating the explanations from the output to the input layer. We evaluate the proposed method empirically on the MNIST and ILSVRC data sets. HighlightsA novel method to explain nonlinear classification decisions in terms of input variables is introduced.The method is based on Taylor expansions and decomposes the output of a deep neural network in terms of input variables.The resulting deep Taylor decomposition can be applied directly to existing neural networks without retraining.The method is tested on two large-scale neural networks for image classification: BVLC CaffeNet and GoogleNet.", "DeConvNet, Guided BackProp, LRP, were invented to better understand deep neural networks. We show that these methods do not produce the theoretically correct explanation for a linear model. Yet they are used on multi-layer networks with millions of parameters. This is a cause for concern since linear models are simple neural networks. We argue that explanation methods for neural nets should work reliably in the limit of simplicity, the linear models. Based on our analysis of linear models we propose a generalization that yields two explanation techniques (PatternNet and PatternAttribution) that are theoretically sound for linear models and produce improved explanations for deep networks.", "Understanding why a model makes a certain prediction can be as crucial as the prediction's accuracy in many applications. However, the highest accuracy for large modern datasets is often achieved by complex models that even experts struggle to interpret, such as ensemble or deep learning models, creating a tension between accuracy and interpretability. In response, various methods have recently been proposed to help users interpret the predictions of complex models, but it is often unclear how these methods are related and when one method is preferable over another. To address this problem, we present a unified framework for interpreting predictions, SHAP (SHapley Additive exPlanations). SHAP assigns each feature an importance value for a particular prediction. Its novel components include: (1) the identification of a new class of additive feature importance measures, and (2) theoretical results showing there is a unique solution in this class with a set of desirable properties. The new class unifies six existing methods, notable because several recent methods in the class lack the proposed desirable properties. 
Based on insights from this unification, we present new methods that show improved computational performance and or better consistency with human intuition than previous approaches.", "Understanding and interpreting classification decisions of automated image classification systems is of high value in many applications, as it allows to verify the reasoning of the system and provides additional information to the human expert. Although machine learning methods are solving very successfully a plethora of tasks, they have in most cases the disadvantage of acting as a black box, not providing any information about what made them arrive at a particular decision. This work proposes a general solution to the problem of understanding classification decisions by pixel-wise decomposition of nonlinear classifiers. We introduce a methodology that allows to visualize the contributions of single pixels to predictions for kernel-based classifiers over Bag of Words features and for multilayered neural networks. These pixel contributions can be visualized as heatmaps and are provided to a human expert who can intuitively not only verify the validity of the classification decision, but also focus further analysis on regions of potential interest. We evaluate our method for classifiers trained on PASCAL VOC 2009 images, synthetic image data containing geometric shapes, the MNIST handwritten digits data set and for the pre-trained ImageNet model available as part of the Caffe open source package.", "How can we explain the predictions of a black-box model? In this paper, we use influence functions — a classic technique from robust statistics — to trace a model's prediction through the learning algorithm and back to its training data, thereby identifying training points most responsible for a given prediction. To scale up influence functions to modern machine learning settings, we develop a simple, efficient implementation that requires only oracle access to gradients and Hessian-vector products. We show that even on non-convex and non-differentiable models where the theory breaks down, approximations to influence functions can still provide valuable information. On linear models and convolutional neural networks, we demonstrate that influence functions are useful for multiple purposes: understanding model behavior, debugging models, detecting dataset errors, and even creating visually-indistinguishable training-set attacks." ] }
1907.02584
2953945697
We propose a fast, model agnostic method for finding interpretable counterfactual explanations of classifier predictions by using class prototypes. We show that class prototypes, obtained using either an encoder or through class specific k-d trees, significantly speed up the search for counterfactual instances and result in more interpretable explanations. We introduce two novel metrics to quantitatively evaluate local interpretability at the instance level. We use these metrics to illustrate the effectiveness of our method on an image and tabular dataset, respectively MNIST and Breast Cancer Wisconsin (Diagnostic). The method also eliminates the computational bottleneck that arises because of numerical gradient evaluation for @math models.
Another approach is to determine which features should remain the same so that the prediction does not change. These unchanged features can be translated into rules called Anchors @cite_32 . Anchors are complementary to counterfactual reasoning, and concepts from both approaches have been combined in the form of Contrastive Explanations, which consist of Pertinent Positives and Pertinent Negatives @cite_34 @cite_9 . Similar to Anchors, Pertinent Positives detect the minimal and sufficient subset of features that are needed to leave the prediction unchanged. Pertinent Negatives, on the other hand, find feature values that should be minimally and necessarily absent in order to keep the original prediction, and resemble counterfactual reasoning. Contrastive Explanations rely on the concept of neutral background values for each feature, which are often difficult to obtain. @cite_21 tackle this issue by introducing learned monotonic attribute functions representing meaningful concepts. These high-level interpretable concepts can be learned either through labeled examples @cite_4 or in an unsupervised fashion via disentangled representations @cite_13 . In order to generate realistic Contrastive Explanations, the perturbed instance needs to lie on the training data manifold, modeled by generative adversarial networks @cite_22 @cite_16 or variational autoencoders @cite_1 .
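To illustrate the flavor of such contrastive objectives, here is a schematic pertinent-negative style loss in NumPy: it rewards a perturbation delta that pushes the prediction away from the original class t0 while penalizing large or dense perturbations. This is a simplified restatement made for illustration, not the cited papers' exact formulation, and predict_proba is an assumed black-box callable.

```python
import numpy as np

def pertinent_negative_loss(predict_proba, x, delta, t0,
                            kappa=0.0, beta=0.1, c=1.0):
    """Schematic contrastive loss: small when x + delta has left class t0
    and delta is small (l2 term) and sparse (l1 term)."""
    p = predict_proba((x + delta)[None, :])[0]
    margin = p[t0] - np.delete(p, t0).max()   # positive while still class t0
    attack = max(margin, -kappa)              # hinge on the class margin
    return c * attack + beta * np.abs(delta).sum() + (delta ** 2).sum()
```

Minimizing this over delta (by gradient descent for differentiable models, or by black-box search otherwise) yields a pertinent-negative style perturbation; adding a prototype term, as in the paper above, guides the search toward the data manifold and speeds it up.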
{ "cite_N": [ "@cite_4", "@cite_22", "@cite_9", "@cite_21", "@cite_1", "@cite_32", "@cite_16", "@cite_34", "@cite_13" ], "mid": [ "2963483561", "2099471712", "2947794021", "2946940672", "", "", "2962760235", "2963276306", "2963366547" ], "abstract": [ "", "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.", "Recently, a method [7] was proposed to generate contrastive explanations for differentiable models such as deep neural networks, where one has complete access to the model. In this work, we propose a method, Model Agnostic Contrastive Explanations Method (MACEM), to generate contrastive explanations for classification model where one is able to query the class probabilities for a desired input. This allows us to generate contrastive explanations for not only neural networks, but models such as random forests, boosted trees and even arbitrary ensembles that are still amongst the state-of-the-art when learning on structured data [13]. Moreover, to obtain meaningful explanations we propose a principled approach to handle real and categorical features leading to novel formulations for computing pertinent positives and negatives that form the essence of a contrastive explanation. A detailed treatment of the different data types of this nature was not performed in the previous work, which assumed all features to be positive real valued with zero being indicative of the least interesting value. We part with this strong implicit assumption and generalize these methods so as to be applicable across a much wider range of problem settings. We quantitatively and qualitatively validate our approach over 5 public datasets covering diverse domains.", "Explaining decisions of deep neural networks is a hot research topic with applications in medical imaging, video surveillance, and self driving cars. Many methods have been proposed in literature to explain these decisions by identifying relevance of different pixels. In this paper, we propose a method that can generate contrastive explanations for such data where we not only highlight aspects that are in themselves sufficient to justify the classification by the deep model, but also new aspects which if added will change the classification. One of our key contributions is how we define \"addition\" for such rich data in a formal yet humanly interpretable way that leads to meaningful results. This was one of the open questions laid out in Dhurandhar this http URL. (2018) [5], which proposed a general framework for creating (local) contrastive explanations for deep models. 
We showcase the efficacy of our approach on CelebA and Fashion-MNIST in creating intuitive explanations that are also quantitatively superior compared with other state-of-the-art interpretability methods.", "", "", "We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.", "In this paper we propose a novel method that provides contrastive explanations justifying the classification of an input by a black box classifier such as a deep neural network. Given an input we find what should be minimally and sufficiently present (viz. important object pixels in an image) to justify its classification and analogously what should be minimally and necessarily absent (viz. certain background pixels). We argue that such explanations are natural for humans and are used commonly in domains such as health care and criminology. What is minimally but critically absent is an important part of an explanation, which, to the best of our knowledge, has not been explicitly identified by current explanation methods that explain predictions of neural networks. We validate our approach on three real datasets obtained from diverse domains; namely, a handwritten digits dataset MNIST, a large procurement fraud dataset and a brain activity strength dataset. In all three cases, we witness the power of our approach in generating precise explanations that are also easy for human experts to understand and evaluate.", "Disentangled representations, where the higher level data generative factors are reflected in disjoint latent dimensions, offer several benefits such as ease of deriving invariant representations, transferability to other tasks, interpretability, etc. We consider the problem of unsupervised learning of disentangled representations from a large pool of unlabeled observations, and propose a variational inference based approach to infer disentangled latent factors. We introduce a regularizer on the expectation of the approximate posterior over observed data that encourages the disentanglement. We evaluate the proposed approach using several quantitative metrics and empirically observe significant gains over existing methods in terms of both disentanglement and data likelihood (reconstruction quality)." ] }
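To make the Pertinent Negative / counterfactual search described in the related-work paragraph above concrete, here is a deliberately simple, model-agnostic sketch that only queries class probabilities, in the spirit of (but much cruder than) the cited methods. The black box `predict_proba`, the random-walk search, and all names are illustrative assumptions, not the cited algorithms.

```python
import numpy as np

def counterfactual_search(x, predict_proba, step=0.05, max_iters=5000, seed=0):
    # Grow a sparse perturbation delta until the black-box prediction flips,
    # remembering the smallest (L1) class-changing delta seen so far.
    rng = np.random.default_rng(seed)
    orig = int(np.argmax(predict_proba(x)))
    delta = np.zeros_like(x)
    best = None
    for _ in range(max_iters):
        cand = delta.copy()
        i = rng.integers(len(x))
        cand[i] += step * rng.choice([-1.0, 1.0])  # nudge one feature
        if int(np.argmax(predict_proba(x + cand))) != orig:
            if best is None or np.abs(cand).sum() < np.abs(best).sum():
                best = cand.copy()  # sparser counterfactual found
        else:
            delta = cand  # keep exploring from the non-flipping point
    return best  # None if the budget never produced a class change

# Toy black box: a fixed logistic model over two features.
w = np.array([1.5, -2.0])
def predict_proba(x):
    p = 1.0 / (1.0 + np.exp(-(x @ w)))
    return np.array([1.0 - p, p])

print(counterfactual_search(np.array([1.0, 1.0]), predict_proba))
```

The cited methods replace this random walk with principled objectives (elastic-net sparsity terms, autoencoder reconstruction losses, or the prototype guidance of the main paper) so that the returned delta is both minimal and on-manifold.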
1907.02584
2953945697
We propose a fast, model-agnostic method for finding interpretable counterfactual explanations of classifier predictions by using class prototypes. We show that class prototypes, obtained using either an encoder or through class-specific k-d trees, significantly speed up the search for counterfactual instances and result in more interpretable explanations. We introduce two novel metrics to quantitatively evaluate local interpretability at the instance level. We use these metrics to illustrate the effectiveness of our method on an image and a tabular dataset, respectively MNIST and Breast Cancer Wisconsin (Diagnostic). The method also eliminates the computational bottleneck that arises because of numerical gradient evaluation for @math models.
One of the key contributions of this paper is the use of prototypes to guide the counterfactual search process. @cite_23 @cite_27 use prototypes as example-based explanations to improve the interpretability of complex datasets. Besides improving interpretability, prototypes have a broad range of applications, such as clustering @cite_6 , classification @cite_30 @cite_0 , and few-shot learning @cite_28 . If we have access to an encoder @cite_20 , we follow the approach of @cite_28 , who define a class prototype as the mean encoding of the instances that belong to that class. In the absence of an encoder, we find prototypes through class-specific k-d trees @cite_29 . (A minimal sketch of both prototype constructions follows the reference block below.)
{ "cite_N": [ "@cite_30", "@cite_28", "@cite_29", "@cite_6", "@cite_0", "@cite_27", "@cite_23", "@cite_20" ], "mid": [ "2084701050", "2601450892", "2165558283", "87092222", "1969719708", "2732351827", "2551974706", "2154642048" ], "abstract": [ "We discuss a method for selecting prototypes in the classification setting (in which the samples fall into known discrete categories). Our method of focus is derived from three basic properties that we believe a good prototype set should satisfy. This intuition is translated into a set cover optimization problem, which we solve approximately using standard approaches. While prototype selection is usually viewed as purely a means toward building an efficient classifier, in this paper we emphasize the inherent value of having a set of prototypical elements. That said, by using the nearest-neighbor rule on the set of prototypes, we can of course discuss our method as a classifier as well. We demonstrate the interpretative value of producing prototypes on the well-known USPS ZIP code digits data set and show that as a classifier it performs reasonably well. We apply the method to a proteomics data set in which the samples are strings and therefore not naturally embedded in a vector space. Our method is compatible with any dissimilarity measure, making it amenable to situations in which using a non-Euclidean metric is desirable or even necessary.", "A recent approach to few-shot classification called matching networks has demonstrated the benefits of coupling metric learning with a training procedure that mimics test. This approach relies on a complicated fine-tuning procedure and an attention scheme that forms a distribution over all points in the support set, scaling poorly with its size. We propose a more streamlined approach, prototypical networks, that learns a metric space in which few-shot classification can be performed by computing Euclidean distances to prototype representations of each class, rather than individual points. Our method is competitive with state-of-the-art one-shot classification approaches while being much simpler and more scalable with the size of the support set. We empirically demonstrate the performance of our approach on the Omniglot and mini-ImageNet datasets. We further demonstrate that a similar idea can be used for zero-shot learning, where each class is described by a set of attributes, and achieve state-of-the-art results on the Caltech UCSD bird dataset.", "This paper develops the multidimensional binary search tree (or k -d tree, where k is the dimensionality of the search space) as a data structure for storage of information to be retrieved by associative searches. The k -d tree is defined and examples are given. It is shown to be quite efficient in its storage requirements. A significant advantage of this structure is that a single data structure can handle many types of queries very efficiently. Various utility algorithms are developed; their proven average running times in an n record file are: insertion, O (log n ); deletion of the root, O ( n ( k -1) k ); deletion of a random node, O (log n ); and optimization (guarantees logarithmic performance of searches), O ( n log n ). Search algorithms are given for partial match queries with t keys specified [proven maximum running time of O ( n ( k - t ) k )] and for nearest neighbor queries [empirically observed average running time of O (log n ).] These performances far surpass the best currently known algorithms for these tasks. 
An algorithm is presented to handle any general intersection query. The main focus of this paper is theoretical. It is felt, however, that k-d trees could be quite useful in many applications, and examples of potential uses are given.", "", "We propose a general framework for nonparametric classification of multi-dimensional numerical patterns. Given training points for each class, it builds a set cover with convex sets, each of which contains some training points of the class but no points of the other classes. Each convex set has thus an associated class label, and classification of a query point is made to the class of the convex set such that the projection of the query point onto its boundary is minimal. In this sense, the convex sets of a class are regarded as \"prototypes\" for that class. We then apply this framework to two special types of convex sets, minimum enclosing balls and convex hulls, giving algorithms for constructing a set cover with them and for computing the projection length onto their boundaries. For convex hulls, we also give a method for implicitly evaluating whether a point is contained in a convex hull, which can avoid computational difficulty for explicit construction of convex hulls in high-dimensional space.", "In this paper we propose an efficient algorithm ProtoDash for selecting prototypical examples from complex datasets. Our work builds on top of the learn to criticize (L2C) work of Kim et al. (2016) and generalizes it to not only select prototypes for a given sparsity level @math but also to associate non-negative weights with each of them, indicative of the importance of each prototype. Unlike in the case of L2C, this extension provides a single coherent framework under which both prototypes and criticisms (i.e. lowest weighted prototypes) can be found. Furthermore, our framework works for any symmetric positive definite kernel, thus addressing one of the open questions laid out in Kim et al. (2016). Our additional requirement of learning non-negative weights introduces technical challenges as the objective is no longer submodular as in the previous work. However, we show that the problem is weakly submodular and derive approximation guarantees for our fast ProtoDash algorithm. Moreover, ProtoDash can not only find prototypical examples for a dataset @math , but it can also find (weighted) prototypical examples from @math that best represent another dataset @math , where @math and @math belong to the same feature space. We demonstrate the efficacy of our method on diverse domains, namely retail, digit recognition (MNIST) and the latest publicly available 40 health questionnaires obtained from the Center for Disease Control (CDC) website maintained by the US Dept. of Health. We validate the results quantitatively as well as qualitatively based on expert feedback and recently published scientific studies on public health.", "Example-based explanations are widely used in the effort to improve the interpretability of highly complex distributions. However, prototypes alone are rarely sufficient to represent the gist of the complexity. In order for users to construct better mental models and understand complex data distributions, we also need criticism to explain what are not captured by prototypes. Motivated by the Bayesian model criticism framework, we develop MMD-critic, which efficiently learns prototypes and criticism, designed to aid human interpretability. 
A human subject pilot study shows that the MMD-critic selects prototypes and criticism that are useful to facilitate human understanding and reasoning. We also evaluate the prototypes selected by MMD-critic via a nearest prototype classifier, showing competitive performance compared to baselines.", "This chapter contains sections titled: The Problem, The Generalized Delta Rule, Simulation Results, Some Further Generalizations, Conclusion" ] }
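As a rough illustration of the two prototype constructions mentioned in the related-work paragraph above: class prototypes as mean encodings when an encoder is available, and per-class k-d trees otherwise. This is a minimal sketch, not the paper's implementation; the `encoder` argument and all helper names are assumptions, and scipy's cKDTree stands in for the class-specific trees.

```python
import numpy as np
from scipy.spatial import cKDTree

def prototypes_from_encoder(X, y, encoder):
    # Class prototype = mean encoding of that class's instances,
    # in the spirit of prototypical networks (@cite_28).
    Z = encoder(X)
    return {c: Z[y == c].mean(axis=0) for c in np.unique(y)}

def class_kdtrees(X, y):
    # Without an encoder: one k-d tree (@cite_29) per class,
    # built directly in input space.
    return {c: cKDTree(X[y == c]) for c in np.unique(y)}

def nearest_prototype(x, trees, exclude=None):
    # Closest other-class training instance, usable as the target
    # prototype during a counterfactual search (exclude = original class).
    best = None
    for c, tree in trees.items():
        if c == exclude:
            continue
        dist, idx = tree.query(x)
        if best is None or dist < best[0]:
            best = (dist, c, np.asarray(tree.data)[idx])
    return best  # (distance, class label, prototype instance)

# Tiny usage example on synthetic two-feature data.
X = np.random.default_rng(1).normal(size=(100, 2))
y = (X[:, 0] > 0).astype(int)
trees = class_kdtrees(X, y)
print(nearest_prototype(np.array([0.1, 0.0]), trees, exclude=1))
```

Pulling the perturbed instance toward such a prototype is what lets the search avoid numerical gradient estimation for black-box models, which is the speed-up claimed in the abstract.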