Columns: aid (string, 9-15 chars), mid (string, 7-10 chars), abstract (string, 78-2.56k chars), related_work (string, 92-1.77k chars), ref_abstract (dict).
1907.08736
2962812134
Most privacy protection studies for textual data focus on removing explicit sensitive identifiers. However, personal writing style, a strong indicator of authorship, is often neglected. Recent studies on writing-style anonymization can only output numeric vectors, which are difficult for the recipients to interpret. We propose a novel text generation model with the exponential mechanism for authorship anonymization. By augmenting the semantic information through a REINFORCE training reward function, the model can generate differentially private text that has close semantics and a similar grammatical structure to the original text while removing personal traits of the writing style. It does not assume any conditioned labels or parallel text data for training. We evaluate the performance of the proposed model on a real-life peer-review dataset and the Yelp review dataset. The results suggest that our model outperforms the state-of-the-art on semantic preservation, authorship obfuscation, and stylometric transformation.
Writing-style obfuscation studies try to hide the identity of the author. Anonymouth @cite_40 is a tool that generates writing-style attributes and suggests to users how they can anonymize their text with respect to two reference datasets. Follow-up work proposes similar architectures to anonymize text; however, instead of directly changing the text, these methods operate on the term-frequency vector, whose real-life utility is limited. Compared with such semi-automatic methods, which require users to make decisions, our approach learns directly end to end.
{ "cite_N": [ "@cite_40" ], "mid": [ "160636586" ], "abstract": [ "This paper presents Anonymouth, a novel framework for anonymizing writing style. Without accounting for style, anonymous authors risk identification. This framework is necessary to provide a tool for testing the consistency of anonymized writing style and a mechanism for adaptive attacks against stylometry techniques. Our framework defines the steps necessary to anonymize documents and implements them. A key contribution of this work is this framework, including novel methods for identifying which features of documents need to change and how they must be changed to accomplish document anonymization. In our experiment, 80 of the user study participants were able to anonymize their documents in terms of a fixed corpus and limited feature set used. However, modifying pre-written documents were found to be difficult and the anonymization did not hold up to more extensive feature sets. It is important to note that Anonymouth is only the first step toward a tool to acheive stylometric anonymity with respect to state-of-the-art authorship attribution techniques. The topic needs further exploration in order to accomplish significant anonymity." ] }
1907.08895
2963587483
Convolutional Neural Network (CNN) based image segmentation has made great progress in recent years. However, video object segmentation remains a challenging task due to its high computational complexity. Most of the previous methods employ a two-stream CNN framework to handle spatial and motion features separately. In this paper, we propose an end-to-end encoder-decoder style 3D CNN to aggregate spatial and temporal information simultaneously for video object segmentation. To efficiently process video, we propose 3D separable convolution for the pyramid pooling module and decoder, which dramatically reduces the number of operations while maintaining the performance. Moreover, we also extend our framework to video action segmentation by adding an extra classifier to predict the action label for actors in videos. Extensive experiments on several video datasets demonstrate the superior performance of the proposed approach for action and object segmentation compared to the state-of-the-art.
Convolutional neural networks have been demonstrated to achieve excellent results in video action understanding @cite_10 @cite_1 @cite_13. Video should not be treated as a set of independent frames, since the connections between frames provide extra temporal information for understanding. Simonyan et al. @cite_43 propose the two-stream CNN approach for action recognition, which consists of two CNNs taking images and optical flow as input, respectively. To avoid computing optical flow separately, Tran et al. @cite_47 propose a 3D CNN for large-scale action recognition. Hara et al. @cite_11 apply 3D convolution to the ResNet structure. Carreira et al. @cite_44 propose I3D by extending the Inception network from 2D to 3D and including an extra optical-flow stream. Tran et al. @cite_0 and Xie et al. @cite_36 factorize the 3D CNN to treat spatial and temporal information separately, reducing the computational cost while preserving performance. However, to the best of our knowledge, we are the first to exploit 3D CNNs for video object segmentation.
{ "cite_N": [ "@cite_13", "@cite_36", "@cite_1", "@cite_44", "@cite_43", "@cite_0", "@cite_47", "@cite_10", "@cite_11" ], "mid": [ "2761659801", "2751445731", "2963820951", "2962790054", "2883429621", "1923404803", "2953111739", "2949351114", "2772114784" ], "abstract": [ "Convolutional Neural Networks (CNN) have been regarded as a powerful class of models for image recognition problems. Nevertheless, it is not trivial when utilizing a CNN for learning spatio-temporal video representation. A few studies have shown that performing 3D convolutions is a rewarding approach to capture both spatial and temporal dimensions in videos. However, the development of a very deep 3D CNN from scratch results in expensive computational cost and memory demand. A valid question is why not recycle off-the-shelf 2D networks for a 3D CNN. In this paper, we devise multiple variants of bottleneck building blocks in a residual learning framework by simulating @math convolutions with @math convolutional filters on spatial domain (equivalent to 2D CNN) plus @math convolutions to construct temporal connections on adjacent feature maps in time. Furthermore, we propose a new architecture, named Pseudo-3D Residual Net (P3D ResNet), that exploits all the variants of blocks but composes each in different placement of ResNet, following the philosophy that enhancing structural diversity with going deep could improve the power of neural networks. Our P3D ResNet achieves clear improvements on Sports-1M video classification dataset against 3D CNN and frame-based 2D CNN by 5.3 and 1.8 , respectively. We further examine the generalization performance of video representation produced by our pre-trained P3D ResNet on five different benchmarks and three different tasks, demonstrating superior performances over several state-of-the-art techniques.", "3-D convolutional neural networks (3-D-convNets) have been very recently proposed for action recognition in videos, and promising results are achieved. However, existing 3-D-convNets has two “artificial” requirements that may reduce the quality of video analysis: 1) It requires a fixed-sized (e.g., 112 @math 112) input video; and 2) most of the 3-D-convNets require a fixed-length input (i.e., video shots with fixed number of frames). To tackle these issues, we propose an end-to-end pipeline named Two-stream 3-D-convNet Fusion , which can recognize human actions in videos of arbitrary size and length using multiple features. Specifically, we decompose a video into spatial and temporal shots. By taking a sequence of shots as input, each stream is implemented using a spatial temporal pyramid pooling (STPP) convNet with a long short-term memory (LSTM) or CNN-E model, softmax scores of which are combined by a late fusion. We devise the STPP convNet to extract equal-dimensional descriptions for each variable-size shot, and we adopt the LSTM CNN-E model to learn a global description for the input video using these time-varying descriptions. With these advantages, our method should improve all 3-D CNN-based video analysis methods. We empirically evaluate our method for action recognition in videos and the experimental results show that our method outperforms the state-of-the-art methods (both 2-D and 3-D based) on three standard benchmark datasets (UCF101, HMDB51 and ACT datasets).", "Convolutional Neural Networks (CNN) have been regarded as a powerful class of models for image recognition problems. 
Nevertheless, it is not trivial when utilizing a CNN for learning spatio-temporal video representation. A few studies have shown that performing 3D convolutions is a rewarding approach to capture both spatial and temporal dimensions in videos. However, the development of a very deep 3D CNN from scratch results in expensive computational cost and memory demand. A valid question is why not recycle off-the-shelf 2D networks for a 3D CNN. In this paper, we devise multiple variants of bottleneck building blocks in a residual learning framework by simulating 3 x 3 x 3 convolutions with 1 × 3 × 3 convolutional filters on spatial domain (equivalent to 2D CNN) plus 3 × 1 × 1 convolutions to construct temporal connections on adjacent feature maps in time. Furthermore, we propose a new architecture, named Pseudo-3D Residual Net (P3D ResNet), that exploits all the variants of blocks but composes each in different placement of ResNet, following the philosophy that enhancing structural diversity with going deep could improve the power of neural networks. Our P3D ResNet achieves clear improvements on Sports-1M video classification dataset against 3D CNN and frame-based 2D CNN by 5.3 and 1.8 , respectively. We further examine the generalization performance of video representation produced by our pre-trained P3D ResNet on five different benchmarks and three different tasks, demonstrating superior performances over several state-of-the-art techniques.", "Deep learning has been demonstrated to achieve excellent results for image classification and object detection. However, the impact of deep learning on video analysis has been limited due to complexity of video data and lack of annotations. Previous convolutional neural networks (CNN) based video action detection approaches usually consist of two major steps: frame-level action proposal generation and association of proposals across frames. Also, most of these methods employ two-stream CNN framework to handle spatial and temporal feature separately. In this paper, we propose an end-to-end deep network called Tube Convolutional Neural Network (T-CNN) for action detection in videos. The proposed architecture is a unified deep network that is able to recognize and localize action based on 3D convolution features. A video is first divided into equal length clips and next for each clip a set of tube proposals are generated based on 3D Convolutional Network (ConvNet) features. Finally, the tube proposals of different clips are linked together employing network flow and spatio-temporal action detection is performed using these linked video proposals. Extensive experiments on several video datasets demonstrate the superior performance of T-CNN for classifying and localizing actions in both trimmed and untrimmed videos compared to state-of-the-arts.", "Despite the steady progress in video analysis led by the adoption of convolutional neural networks (CNNs), the relative improvement has been less drastic as that in 2D static image classification. Three main challenges exist including spatial (image) feature representation, temporal information representation, and model computation complexity. It was recently shown by Carreira and Zisserman that 3D CNNs, inflated from 2D networks and pretrained on ImageNet, could be a promising way for spatial and temporal representation learning. However, as for model computation complexity, 3D CNNs are much more expensive than 2D CNNs and prone to overfit. 
We seek a balance between speed and accuracy by building an effective and efficient video classification system through systematic exploration of critical network design choices. In particular, we show that it is possible to replace many of the 3D convolutions by low-cost 2D convolutions. Rather surprisingly, best result (in both speed and accuracy) is achieved when replacing the 3D convolutions at the bottom of the network, suggesting that temporal representation learning on high-level “semantic” features is more useful. Our conclusion generalizes to datasets with very different properties. When combined with several other cost-effective designs including separable spatial temporal convolution and feature gating, our system results in an effective video classification system that that produces very competitive results on several action classification benchmarks (Kinetics, Something-something, UCF101 and HMDB), as well as two action detection (localization) benchmarks (JHMDB and UCF101-24).", "Convolutional neural networks (CNNs) have been extensively applied for image recognition problems giving state-of-the-art results on recognition, detection, segmentation and retrieval. In this work we propose and evaluate several deep neural network architectures to combine image information across a video over longer time periods than previously attempted. We propose two methods capable of handling full length videos. The first method explores various convolutional temporal feature pooling architectures, examining the various design choices which need to be made when adapting a CNN for this task. The second proposed method explicitly models the video as an ordered sequence of frames. For this purpose we employ a recurrent neural network that uses Long Short-Term Memory (LSTM) cells which are connected to the output of the underlying CNN. Our best networks exhibit significant performance improvements over previously published results on the Sports 1 million dataset (73.1 vs. 60.9 ) and the UCF-101 datasets with (88.6 vs. 88.0 ) and without additional optical flow information (82.6 vs. 73.0 ).", "Convolutional neural networks (CNNs) have been extensively applied for image recognition problems giving state-of-the-art results on recognition, detection, segmentation and retrieval. In this work we propose and evaluate several deep neural network architectures to combine image information across a video over longer time periods than previously attempted. We propose two methods capable of handling full length videos. The first method explores various convolutional temporal feature pooling architectures, examining the various design choices which need to be made when adapting a CNN for this task. The second proposed method explicitly models the video as an ordered sequence of frames. For this purpose we employ a recurrent neural network that uses Long Short-Term Memory (LSTM) cells which are connected to the output of the underlying CNN. Our best networks exhibit significant performance improvements over previously published results on the Sports 1 million dataset (73.1 vs. 60.9 ) and the UCF-101 datasets with (88.6 vs. 88.0 ) and without additional optical flow information (82.6 vs. 72.8 ).", "Even with the recent advances in convolutional neural networks (CNN) in various visual recognition tasks, the state-of-the-art action recognition system still relies on hand crafted motion feature such as optical flow to achieve the best performance. 
We propose a multitask learning model ActionFlowNet to train a single stream network directly from raw pixels to jointly estimate optical flow while recognizing actions with convolutional neural networks, capturing both appearance and motion in a single model. We additionally provide insights to how the quality of the learned optical flow affects the action recognition. Our model significantly improves action recognition accuracy by a large margin 31 compared to state-of-the-art CNN-based action recognition models trained without external large scale data and additional optical flow input. Without pretraining on large external labeled datasets, our model, by well exploiting the motion information, achieves competitive recognition accuracy to the models trained with large labeled datasets such as ImageNet and Sport-1M.", "In this paper we discuss several forms of spatiotemporal convolutions for video analysis and study their effects on action recognition. Our motivation stems from the observation that 2D CNNs applied to individual frames of the video have remained solid performers in action recognition. In this work we empirically demonstrate the accuracy advantages of 3D CNNs over 2D CNNs within the framework of residual learning. Furthermore, we show that factorizing the 3D convolutional filters into separate spatial and temporal components yields significantly advantages in accuracy. Our empirical study leads to the design of a new spatiotemporal convolutional block \"R(2+1)D\" which gives rise to CNNs that achieve results comparable or superior to the state-of-the-art on Sports-1M, Kinetics, UCF101 and HMDB51." ] }
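The factorization idea running through the abstracts above (P3D's 1 × 3 × 3 plus 3 × 1 × 1 blocks and the R(2+1)D decomposition) can be sketched in a few lines of PyTorch: a full k × k × k convolution is replaced by a spatial convolution followed by a temporal one. The mid-channel width and the placement of the nonlinearity here are illustrative assumptions, not the published architectures.

```python
import torch
import torch.nn as nn

class Factorized3DConv(nn.Module):
    """Replace a full k x k x k 3D convolution with a 1 x k x k spatial
    convolution followed by a k x 1 x 1 temporal convolution, in the
    spirit of P3D / R(2+1)D. The mid-channel width is a free choice."""
    def __init__(self, in_ch, out_ch, k=3, mid_ch=None):
        super().__init__()
        mid_ch = mid_ch or out_ch
        p = k // 2
        self.spatial = nn.Conv3d(in_ch, mid_ch, (1, k, k), padding=(0, p, p))
        self.temporal = nn.Conv3d(mid_ch, out_ch, (k, 1, 1), padding=(p, 0, 0))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):  # x: (N, C, T, H, W)
        return self.act(self.temporal(self.act(self.spatial(x))))

clip = torch.randn(2, 3, 8, 32, 32)   # two 8-frame RGB clips
out = Factorized3DConv(3, 16)(clip)
print(out.shape)                      # torch.Size([2, 16, 8, 32, 32])
```

The factorized pair costs roughly k·k·C_in·C_mid + k·C_mid·C_out weights per position instead of k³·C_in·C_out for the full 3D kernel, which is where the reported savings in operations come from.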
1907.08895
2963587483
Convolutional Neural Network (CNN) based image segmentation has made great progress in recent years. However, video object segmentation remains a challenging task due to its high computational complexity. Most of the previous methods employ a two-stream CNN framework to handle spatial and motion features separately. In this paper, we propose an end-to-end encoder-decoder style 3D CNN to aggregate spatial and temporal information simultaneously for video object segmentation. To efficiently process video, we propose 3D separable convolution for the pyramid pooling module and decoder, which dramatically reduces the number of operations while maintaining the performance. Moreover, we also extend our framework to video action segmentation by adding an extra classifier to predict the action label for actors in videos. Extensive experiments on several video datasets demonstrate the superior performance of the proposed approach for action and object segmentation compared to the state-of-the-art.
The success of CNN-based approaches for image classification @cite_2 @cite_22 has led to dramatic advances in image segmentation @cite_37 @cite_24. Many segmentation approaches leverage recognition models trained on ImageNet and replace the fully connected layers with 1 @math 1 kernel convolutions to generate dense (pixel-wise) labels. Recently, encoder-decoder network architectures, such as SegNet @cite_16 and U-Net @cite_48, have become the mainstream design for semantic segmentation. Moreover, pyramid pooling @cite_32 @cite_7 and dilated convolution @cite_21 @cite_7 @cite_25 are effective techniques for improving segmentation accuracy. Our 3D CNN also builds upon the encoder-decoder structure for video object segmentation.
{ "cite_N": [ "@cite_37", "@cite_22", "@cite_7", "@cite_48", "@cite_21", "@cite_32", "@cite_24", "@cite_2", "@cite_16", "@cite_25" ], "mid": [ "2953264111", "2415731916", "2587989515", "1714639292", "1929903369", "2950975557", "2963881378", "2469938794", "1945099168", "2462457117" ], "abstract": [ "Deep convolutional neural networks (CNNs) have been immensely successful in many high-level computer vision tasks given large labeled datasets. However, for video semantic object segmentation, a domain where labels are scarce, effectively exploiting the representation power of CNN with limited training data remains a challenge. Simply borrowing the existing pretrained CNN image recognition model for video segmentation task can severely hurt performance. We propose a semi-supervised approach to adapting CNN image recognition model trained from labeled image data to the target domain exploiting both semantic evidence learned from CNN, and the intrinsic structures of video data. By explicitly modeling and compensating for the domain shift from the source domain to the target domain, this proposed approach underpins a robust semantic object segmentation method against the changes in appearance, shape and occlusion in natural videos. We present extensive experiments on challenging datasets that demonstrate the superior performance of our approach compared with the state-of-the-art methods.", "Deep convolutional neural networks (CNNs) have been immensely successful in many high-level computer vision tasks given large labelled datasets. However, for video semantic object segmentation, a domain where labels are scarce, effectively exploiting the representation power of CNN with limited training data remains a challenge. Simply borrowing the existing pre-trained CNN image recognition model for video segmentation task can severely hurt performance. We propose a semi-supervised approach to adapting CNN image recognition model trained from labelled image data to the target domain exploiting both semantic evidence learned from CNN, and the intrinsic structures of video data. By explicitly modelling and compensating for the domain shift from the source domain to the target domain, this proposed approach underpins a robust semantic object segmentation method against the changes in appearance, shape and occlusion in natural videos. We present extensive experiments on challenging datasets that demonstrate the superior performance of our approach compared with the state-of-the-art methods.", "In this paper we address the problem of semantic labeling of indoor scenes on RGB-D data. With the availability of RGB-D cameras, it is expected that additional depth measurement will improve the accuracy. Here we investigate a solution how to incorporate complementary depth information into a semantic segmentation framework by making use of convolutional neural networks (CNNs). Recently encoder-decoder type fully convolutional CNN architectures have achieved a great success in the field of semantic segmentation. Motivated by this observation we propose an encoder-decoder type network, where the encoder part is composed of two branches of networks that simultaneously extract features from RGB and depth images and fuse depth features into the RGB feature maps as the network goes deeper. 
Comprehensive experimental evaluations demonstrate that the proposed fusion-based architecture achieves competitive results with the state-of-the-art methods on the challenging SUN RGB-D benchmark obtaining 76.27% global accuracy, 48.30% average class accuracy and 37.29% average intersection-over-union score.", "Generating natural language descriptions for in-the-wild videos is a challenging task. Most state-of-the-art methods for solving this problem borrow existing deep convolutional neural network (CNN) architectures (AlexNet, GoogLeNet) to extract a visual representation of the input video. However, these deep CNN architectures are designed for single-label centered-positioned object classification. While they generate strong semantic features, they have no inherent structure allowing them to detect multiple objects of different sizes and locations in the frame. Our paper tries to solve this problem by integrating the base CNN into several fully convolutional neural networks (FCNs) to form a multi-scale network that handles multiple receptive field sizes in the original image. FCNs, previously applied to image segmentation, can generate class heat-maps efficiently compared to sliding window mechanisms, and can easily handle multiple scales. To further handle the ambiguity over multiple objects and locations, we incorporate the Multiple Instance Learning mechanism (MIL) to consider objects in different positions and at different scales simultaneously. We integrate our multi-scale multi-instance architecture with a sequence-to-sequence recurrent neural network to generate sentence descriptions based on the visual representation. Ours is the first end-to-end trainable architecture that is capable of multi-scale region processing. Evaluation on a YouTube video dataset shows the advantage of our approach compared to the original single-scale whole frame CNN model. Our flexible and efficient architecture can potentially be extended to support other video processing tasks.", "Deep convolutional neural networks (CNN) have seen tremendous success in large-scale generic object recognition. In comparison with generic object recognition, fine-grained image classification (FGIC) is much more challenging because (i) fine-grained labeled data is much more expensive to acquire (usually requiring domain expertise); (ii) there exists large intra-class and small inter-class variance. Most recent work exploiting deep CNN for image recognition with small training data adopts a simple strategy: pre-train a deep CNN on a large-scale external dataset (e.g., ImageNet) and fine-tune on the small-scale target data to fit the specific classification task. In this paper, beyond the fine-tuning strategy, we propose a systematic framework of learning a deep CNN that addresses the challenges from two new perspectives: (i) identifying easily annotated hyper-classes inherent in the fine-grained data and acquiring a large number of hyper-class-labeled images from readily available external sources (e.g., image search engines), and formulating the problem into multitask learning; (ii) a novel learning model by exploiting a regularization between the fine-grained recognition model and the hyper-class recognition model. We demonstrate the success of the proposed framework on two small-scale fine-grained datasets (Stanford Dogs and Stanford Cars) and on a large-scale car dataset that we collected.", "State-of-the-art approaches for semantic image segmentation are built on Convolutional Neural Networks (CNNs). 
The typical segmentation architecture is composed of (a) a downsampling path responsible for extracting coarse semantic features, followed by (b) an upsampling path trained to recover the input image resolution at the output of the model and, optionally, (c) a post-processing module (e.g. Conditional Random Fields) to refine the model predictions. Recently, a new CNN architecture, Densely Connected Convolutional Networks (DenseNets), has shown excellent results on image classification tasks. The idea of DenseNets is based on the observation that if each layer is directly connected to every other layer in a feed-forward fashion then the network will be more accurate and easier to train. In this paper, we extend DenseNets to deal with the problem of semantic segmentation. We achieve state-of-the-art results on urban scene benchmark datasets such as CamVid and Gatech, without any further post-processing module nor pretraining. Moreover, due to smart construction of the model, our approach has much fewer parameters than currently published best entries for these datasets. Code to reproduce the experiments is available here: this https URL", "We present a novel and practical deep fully convolutional neural network architecture for semantic pixel-wise segmentation termed SegNet. This core trainable segmentation engine consists of an encoder network, a corresponding decoder network followed by a pixel-wise classification layer. The architecture of the encoder network is topologically identical to the 13 convolutional layers in the VGG16 network [1]. The role of the decoder network is to map the low resolution encoder feature maps to full input resolution feature maps for pixel-wise classification. The novelty of SegNet lies in the manner in which the decoder upsamples its lower resolution input feature map(s). Specifically, the decoder uses pooling indices computed in the max-pooling step of the corresponding encoder to perform non-linear upsampling. This eliminates the need for learning to upsample. The upsampled maps are sparse and are then convolved with trainable filters to produce dense feature maps. We compare our proposed architecture with the widely adopted FCN [2] and also with the well-known DeepLab-LargeFOV [3], DeconvNet [4] architectures. This comparison reveals the memory versus accuracy trade-off involved in achieving good segmentation performance. SegNet was primarily motivated by scene understanding applications. Hence, it is designed to be efficient both in terms of memory and computational time during inference. It is also significantly smaller in the number of trainable parameters than other competing architectures and can be trained end-to-end using stochastic gradient descent. We also performed a controlled benchmark of SegNet and other architectures on both road scenes and SUN RGB-D indoor scene segmentation tasks. These quantitative assessments show that SegNet provides good performance with competitive inference time and most efficient inference memory-wise as compared to other architectures. We also provide a Caffe implementation of SegNet and a web demo at http://mi.eng.cam.ac.uk/projects/segnet/.", "This paper describes a deep learning approach to semantic segmentation of very high resolution (aerial) images. Deep neural architectures hold the promise of end-to-end learning from raw images, making heuristic feature design obsolete. 
Over the last decade this idea has seen a revival, and in recent years deep convolutional neural networks (CNNs) have emerged as the method of choice for a range of image interpretation tasks like visual recognition and object detection. Still, standard CNNs do not lend themselves to per-pixel semantic segmentation, mainly because one of their fundamental principles is to gradually aggregate information over larger and larger image regions, making it hard to disentangle contributions from different pixels. Very recently two extensions of the CNN framework have made it possible to trace the semantic information back to a precise pixel position: deconvolutional network layers undo the spatial downsampling, and Fully Convolution Networks (FCNs) modify the fully connected classification layers of the network in such a way that the location of individual activations remains explicit. We design an FCN which takes as input intensity and range data and, with the help of aggressive deconvolution and recycling of early network layers, converts them into a pixelwise classification at full resolution. We discuss design choices and intricacies of such a network, and demonstrate that an ensemble of several networks achieves excellent results on challenging data such as the ISPRS semantic labeling benchmark, using only the raw data as input.", "We present a two-module approach to semantic segmentation that incorporates Convolutional Networks (CNNs) and Graphical Models. Graphical models are used to generate a small (5-30) set of diverse segmentation proposals, such that this set has high recall. Since the number of required proposals is so low, we can extract fairly complex features to rank them. Our complex feature of choice is a novel CNN called SegNet, which directly outputs a (coarse) semantic segmentation. Importantly, SegNet is specifically trained to optimize the corpus-level PASCAL IOU loss function. To the best of our knowledge, this is the first CNN specifically designed for semantic segmentation. This two-module approach achieves @math on the PASCAL 2012 segmentation challenge.", "Most convolutional neural networks (CNNs) lack mid-level layers that model semantic parts of objects. This limits CNN-based methods from reaching their full potential in detecting and utilizing small semantic parts in recognition. Introducing such mid-level layers can facilitate the extraction of part-specific features which can be utilized for better recognition performance. This is particularly important in the domain of fine-grained recognition. In this paper, we propose a new CNN architecture that integrates semantic part detection and abstraction (SPDACNN) for fine-grained classification. The proposed network has two sub-networks: one for detection and one for recognition. The detection sub-network has a novel top-down proposal method to generate small semantic part candidates for detection. The classification sub-network introduces novel part layers that extract features from parts detected by the detection sub-network, and combine them for recognition. As a result, the proposed architecture provides an end-to-end network that performs detection, localization of multiple semantic parts, and whole object recognition within one framework that shares the computation of convolutional filters. Our method outperforms state-of-the-art methods with a large margin for small parts detection (e.g. our precision of 93.40% vs the best previous precision of 74.00% for detecting the head on CUB-2011). 
It also compares favorably to the existing state-of-the-art on fine-grained classification, e.g. it achieves 85.14% accuracy on CUB-2011." ] }
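As a concrete example of one technique named in the related-work paragraph above, here is a minimal 2D sketch of a PSPNet-style pyramid pooling module (@cite_32): pool the feature map to a few grid sizes, project each with a 1 × 1 convolution, upsample back, and concatenate with the input. The bin sizes and channel reduction follow common practice and are assumptions, not the exact modules of the cited networks.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidPooling2D(nn.Module):
    """PSPNet-style pyramid pooling: pool to several grid sizes, project
    with 1x1 convolutions, upsample, and concatenate with the input."""
    def __init__(self, in_ch, bins=(1, 2, 3, 6)):
        super().__init__()
        out_ch = in_ch // len(bins)
        self.stages = nn.ModuleList(
            nn.Sequential(nn.AdaptiveAvgPool2d(b), nn.Conv2d(in_ch, out_ch, 1))
            for b in bins
        )

    def forward(self, x):  # x: (N, C, H, W)
        h, w = x.shape[2:]
        feats = [x] + [
            F.interpolate(stage(x), size=(h, w), mode="bilinear", align_corners=False)
            for stage in self.stages
        ]
        return torch.cat(feats, dim=1)  # doubles the channels when len(bins) divides C

x = torch.randn(1, 64, 32, 32)
print(PyramidPooling2D(64)(x).shape)  # torch.Size([1, 128, 32, 32])
```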
1907.08895
2963587483
Convolutional Neural Network (CNN) based image segmentation has made great progress in recent years. However, video object segmentation remains a challenging task due to its high computational complexity. Most of the previous methods employ a two-stream CNN framework to handle spatial and motion features separately. In this paper, we propose an end-to-end encoder-decoder style 3D CNN to aggregate spatial and temporal information simultaneously for video object segmentation. To efficiently process video, we propose 3D separable convolution for the pyramid pooling module and decoder, which dramatically reduces the number of operations while maintaining the performance. Moreover, we also extend our framework to video action segmentation by adding an extra classifier to predict the action label for actors in videos. Extensive experiments on several video datasets demonstrate the superior performance of the proposed approach for action and object segmentation compared to the state-of-the-art.
Video object segmentation @cite_5 @cite_26 aims to delineate the foreground object(s) from the background in each frame. Semi-supervised segmentation pipelines @cite_15 @cite_42 assume that the segmentation mask of the first frame in the sequence is given at test time, and exploit the temporal consistency of video sequences to propagate this initial mask to subsequent frames.
{ "cite_N": [ "@cite_5", "@cite_15", "@cite_26", "@cite_42" ], "mid": [ "2329995605", "1989348325", "2155598147", "2462481369" ], "abstract": [ "Automatic discovery of foreground objects in video sequences is important in computer vision, with applications to object tracking, video segmentation and weakly supervised learning. This task is related to cosegmentation [4, 5] and weakly supervised localization [2, 6]. We propose an efficient method for the simultaneous discovery of foreground objects in video and their segmentation masks across multiple frames. We offer a graph matching formulation for bounding box selection and refinement using second and higher order terms. It is based on an Integer Quadratic Programming formulation and related to graph matching and MAP inference [3]. We take into consideration local frame-based information as well as spatiotemporal and appearance consistency over multiple frames. Our approach consists of three stages. First, we find an initial pool of candidate boxes using a novel and fast foreground estimation method in video (VideoPCA) based on Principal Component Analysis of the video content. The output of VideoPCA combined with Edge Boxes [8] is then used to produce high quality bounding box proposals. Second, we efficiently match bounding boxes across multiple frames, using the IPFP algorithm [3] with pairwise geometric and appearance terms. Third, we optimize the higher order terms using the Mean-Shift algorithm [1] to refine the box locations and establish appearance regularity over multiple frames. We make the following contributions:", "We present an approach to discover and segment foreground object(s) in video. Given an unannotated video sequence, the method first identifies object-like regions in any frame according to both static and dynamic cues. We then compute a series of binary partitions among those candidate “key-segments” to discover hypothesis groups with persistent appearance and motion. Finally, using each ranked hypothesis in turn, we estimate a pixel-level object labeling across all frames, where (a) the foreground likelihood depends on both the hypothesis's appearance as well as a novel localization prior based on partial shape matching, and (b) the background likelihood depends on cues pulled from the key-segments' (possibly diverse) surroundings observed across the sequence. Compared to existing methods, our approach automatically focuses on the persistent foreground regions of interest while resisting oversegmentation. We apply our method to challenging benchmark videos, and show competitive or better results than the state-of-the-art.", "In this paper, we propose a novel approach to extract primary object segments in videos in the object proposal' domain. The extracted primary object regions are then used to build object models for optimized video segmentation. The proposed approach has several contributions: First, a novel layered Directed Acyclic Graph (DAG) based framework is presented for detection and segmentation of the primary object in video. We exploit the fact that, in general, objects are spatially cohesive and characterized by locally smooth motion trajectories, to extract the primary object from the set of all available proposals based on motion, appearance and predicted-shape similarity across frames. Second, the DAG is initialized with an enhanced object proposal set where motion based proposal predictions (from adjacent frames) are used to expand the set of object proposals for a particular frame. 
Last, the paper presents a motion scoring function for selection of object proposals that emphasizes high optical flow gradients at proposal boundaries to discriminate between moving objects and the background. The proposed approach is evaluated using several challenging benchmark videos and it outperforms both unsupervised and supervised state-of-the-art methods.", "An unsupervised video object segmentation algorithm, which discovers a primary object in a video sequence automatically, is proposed in this work. We introduce three energies in terms of foreground and background probability distributions: Markov, spatiotemporal, and antagonistic energies. Then, we minimize a hybrid of the three energies to separate a primary object from its background. However, the hybrid energy is nonconvex. Therefore, we develop the alternate convex optimization (ACO) scheme, which decomposes the nonconvex optimization into two quadratic programs. Moreover, we propose the forward-backward strategy, which performs the segmentation sequentially from the first to the last frames and then vice versa, to exploit temporal correlations. Experimental results on extensive datasets demonstrate that the proposed ACO algorithm outperforms the state-of-the-art techniques significantly." ] }
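A minimal sketch of the mask-propagation idea behind the semi-supervised pipelines above: given the first-frame mask and an optical-flow field, the mask is warped from frame to frame. The flow convention (a displacement per target pixel) and the toy inputs are assumptions; real systems typically refine the warped mask with a learned network rather than relying on the warp alone.

```python
import torch
import torch.nn.functional as F

def warp_mask(mask, flow):
    """Warp a soft segmentation mask to the next frame by sampling it at
    pixel + flow. mask: (N, 1, H, W); flow: (N, 2, H, W) with channel 0 the
    x-displacement and channel 1 the y-displacement."""
    n, _, h, w = mask.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().unsqueeze(0) + flow  # (N, 2, H, W)
    grid[:, 0] = 2.0 * grid[:, 0] / (w - 1) - 1.0  # normalize x to [-1, 1]
    grid[:, 1] = 2.0 * grid[:, 1] / (h - 1) - 1.0  # normalize y to [-1, 1]
    return F.grid_sample(mask, grid.permute(0, 2, 3, 1), align_corners=True)

mask0 = torch.zeros(1, 1, 4, 4)
mask0[0, 0, 1:3, 1:3] = 1.0           # toy first-frame mask
flow = torch.ones(1, 2, 4, 4)         # uniform 1-pixel displacement
print(warp_mask(mask0, flow)[0, 0])   # mask shifted by one pixel
```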
1907.08895
2963587483
Convolutional Neural Network (CNN) based image segmentation has made great progress in recent years. However, video object segmentation remains a challenging task due to its high computational complexity. Most of the previous methods employ a two-stream CNN framework to handle spatial and motion features separately. In this paper, we propose an end-to-end encoder-decoder style 3D CNN to aggregate spatial and temporal information simultaneously for video object segmentation. To efficiently process video, we propose 3D separable convolution for the pyramid pooling module and decoder, which dramatically reduces the number of operations while maintaining the performance. Moreover, we also extend our framework to video action segmentation by adding an extra classifier to predict the action label for actors in videos. Extensive experiments on several video datasets demonstrate the superior performance of the proposed approach for action and object segmentation compared to the state-of-the-art.
In the more challenging unsupervised setting, which we address in this paper, no object mask is provided as initialization during the test phase. Unsupervised segmentation has been addressed by several variants of CNN-based models, such as two-stream architectures @cite_14 @cite_29, recurrent neural networks @cite_40 @cite_30, and multi-scale feature fusion @cite_4 @cite_19. These approaches generally perform much better than traditional clustering-based pipelines @cite_28. The core idea behind them is to leverage motion cues explicitly (via optical flow) using a two-stream network @cite_3 @cite_4 @cite_34, and/or to employ a memory module that captures the evolution of object appearance over time @cite_40 @cite_34.
{ "cite_N": [ "@cite_30", "@cite_14", "@cite_4", "@cite_28", "@cite_29", "@cite_3", "@cite_19", "@cite_40", "@cite_34" ], "mid": [ "1714639292", "2953264111", "2526782364", "2415731916", "1929903369", "2563705555", "2951402970", "2963951674", "2964088293" ], "abstract": [ "Generating natural language descriptions for in-the-wild videos is a challenging task. Most state-of-the-art methods for solving this problem borrow existing deep convolutional neural network (CNN) architectures (AlexNet, GoogLeNet) to extract a visual representation of the input video. However, these deep CNN architectures are designed for single-label centered-positioned object classification. While they generate strong semantic features, they have no inherent structure allowing them to detect multiple objects of different sizes and locations in the frame. Our paper tries to solve this problem by integrating the base CNN into several fully convolutional neural networks (FCNs) to form a multi-scale network that handles multiple receptive field sizes in the original image. FCNs, previously applied to image segmentation, can generate class heat-maps efficiently compared to sliding window mechanisms, and can easily handle multiple scales. To further handle the ambiguity over multiple objects and locations, we incorporate the Multiple Instance Learning mechanism (MIL) to consider objects in different positions and at different scales simultaneously. We integrate our multi-scale multi-instance architecture with a sequence-to-sequence recurrent neural network to generate sentence descriptions based on the visual representation. Ours is the first end-to-end trainable architecture that is capable of multi-scale region processing. Evaluation on a Youtube video dataset shows the advantage of our approach compared to the original single-scale whole frame CNN model. Our flexible and efficient architecture can potentially be extended to support other video processing tasks.", "Deep convolutional neural networks (CNNs) have been immensely successful in many high-level computer vision tasks given large labeled datasets. However, for video semantic object segmentation, a domain where labels are scarce, effectively exploiting the representation power of CNN with limited training data remains a challenge. Simply borrowing the existing pretrained CNN image recognition model for video segmentation task can severely hurt performance. We propose a semi-supervised approach to adapting CNN image recognition model trained from labeled image data to the target domain exploiting both semantic evidence learned from CNN, and the intrinsic structures of video data. By explicitly modeling and compensating for the domain shift from the source domain to the target domain, this proposed approach underpins a robust semantic object segmentation method against the changes in appearance, shape and occlusion in natural videos. We present extensive experiments on challenging datasets that demonstrate the superior performance of our approach compared with the state-of-the-art methods.", "Recently, neuron activations extracted from a pre-trained convolutional neural network (CNN) show promising performance in various visual tasks. However, due to the domain and task bias, using the features generated from the model pre-trained for image classification as image representations for instance retrieval is problematic. In this paper, we propose quartet-net learning to improve the discriminative power of CNN features for instance retrieval. 
The general idea is to map the features into a space where the image similarity can be better evaluated. Our network differs from the traditional Siamese-net in two ways. First, we adopt a double-margin contrastive loss with a dynamic margin tuning strategy to train the network which leads to more robust performance. Second, we introduce in the mimic learning regularization to improve the generalization ability of the network by preventing it from overfitting to the training data. Catering for the network learning, we collect a large-scale dataset, namely GeoPair, which consists of 68k matching image pairs and 63k non-matching pairs. Experiments on several standard instance retrieval datasets demonstrate the effectiveness of our method.", "Deep convolutional neural networks (CNNs) have been immensely successful in many high-level computer vision tasks given large labelled datasets. However, for video semantic object segmentation, a domain where labels are scarce, effectively exploiting the representation power of CNN with limited training data remains a challenge. Simply borrowing the existing pre-trained CNN image recognition model for video segmentation task can severely hurt performance. We propose a semi-supervised approach to adapting CNN image recognition model trained from labelled image data to the target domain exploiting both semantic evidence learned from CNN, and the intrinsic structures of video data. By explicitly modelling and compensating for the domain shift from the source domain to the target domain, this proposed approach underpins a robust semantic object segmentation method against the changes in appearance, shape and occlusion in natural videos. We present extensive experiments on challenging datasets that demonstrate the superior performance of our approach compared with the state-of-the-art methods.", "Deep convolutional neural networks (CNN) have seen tremendous success in large-scale generic object recognition. In comparison with generic object recognition, fine-grained image classification (FGIC) is much more challenging because (i) fine-grained labeled data is much more expensive to acquire (usually requiring domain expertise); (ii) there exists large intra-class and small inter-class variance. Most recent work exploiting deep CNN for image recognition with small training data adopts a simple strategy: pre-train a deep CNN on a large-scale external dataset (e.g., ImageNet) and fine-tune on the small-scale target data to fit the specific classification task. In this paper, beyond the fine-tuning strategy, we propose a systematic framework of learning a deep CNN that addresses the challenges from two new perspectives: (i) identifying easily annotated hyper-classes inherent in the fine-grained data and acquiring a large number of hyper-class-labeled images from readily available external sources (e.g., image search engines), and formulating the problem into multitask learning; (ii) a novel learning model by exploiting a regularization between the fine-grained recognition model and the hyper-class recognition model. We demonstrate the success of the proposed framework on two small-scale fine-grained datasets (Stanford Dogs and Stanford Cars) and on a large-scale car dataset that we collected.", "Recently, very deep convolutional neural networks (CNNs) have shown outstanding performance in object recognition and have also been the first choice for dense classification problems such as semantic segmentation. 
However, repeated subsampling operations like pooling or convolution striding in deep CNNs lead to a significant decrease in the initial image resolution. Here, we present RefineNet, a generic multi-path refinement network that explicitly exploits all the information available along the down-sampling process to enable high-resolution prediction using long-range residual connections. In this way, the deeper layers that capture high-level semantic features can be directly refined using fine-grained features from earlier convolutions. The individual components of RefineNet employ residual connections following the identity mapping mindset, which allows for effective end-to-end training. Further, we introduce chained residual pooling, which captures rich background context in an efficient manner. We carry out comprehensive experiments and set new state-of-the-art results on seven public datasets. In particular, we achieve an intersection-over-union score of 83.4% on the challenging PASCAL VOC 2012 dataset, which is the best reported result to date.", "Recently, very deep convolutional neural networks (CNNs) have shown outstanding performance in object recognition and have also been the first choice for dense classification problems such as semantic segmentation. However, repeated subsampling operations like pooling or convolution striding in deep CNNs lead to a significant decrease in the initial image resolution. Here, we present RefineNet, a generic multi-path refinement network that explicitly exploits all the information available along the down-sampling process to enable high-resolution prediction using long-range residual connections. In this way, the deeper layers that capture high-level semantic features can be directly refined using fine-grained features from earlier convolutions. The individual components of RefineNet employ residual connections following the identity mapping mindset, which allows for effective end-to-end training. Further, we introduce chained residual pooling, which captures rich background context in an efficient manner. We carry out comprehensive experiments and set new state-of-the-art results on seven public datasets. In particular, we achieve an intersection-over-union score of 83.4% on the challenging PASCAL VOC 2012 dataset, which is the best reported result to date.", "Deep convolutional neural networks (CNNs) have become a key element in the recent breakthrough of salient object detection. However, existing CNN-based methods are based on either patchwise (regionwise) training and inference or fully convolutional networks. Methods in the former category are generally time-consuming due to severe storage and computational redundancies among overlapping patches. To overcome this deficiency, methods in the second category attempt to directly map a raw input image to a predicted dense saliency map in a single network forward pass. Though being very efficient, it is arduous for these methods to detect salient objects of different scales or salient regions with weak semantic information. In this paper, we develop hybrid contrast-oriented deep neural networks to overcome the aforementioned limitations. Each of our deep networks is composed of two complementary components, including a fully convolutional stream for dense prediction and a segment-level spatial pooling stream for sparse saliency inference. We further propose an attentional module that learns weight maps for fusing the two saliency predictions from these two streams. 
A tailored alternate scheme is designed to train these deep networks by fine-tuning pretrained baseline models. Finally, a customized fully connected conditional random field model incorporating a salient contour feature embedding can be optionally applied as a postprocessing step to improve spatial coherence and contour positioning in the fused result from these two streams. Extensive experiments on six benchmark data sets demonstrate that our proposed model can significantly outperform the state of the art in terms of all popular evaluation metrics.", "Recent works have shown that exploiting multi-scale representations deeply learned via convolutional neural networks (CNN) is of tremendous importance for accurate contour detection. This paper presents a novel approach for predicting contours which advances the state of the art in two fundamental aspects, i.e. multi-scale feature generation and fusion. Different from previous works directly considering multi-scale feature maps obtained from the inner layers of a primary CNN architecture, we introduce a hierarchical deep model which produces more rich and complementary representations. Furthermore, to refine and robustly fuse the representations learned at different scales, the novel Attention-Gated Conditional Random Fields (AG-CRFs) are proposed. The experiments ran on two publicly available datasets (BSDS500 and NYUDv2) demonstrate the effectiveness of the latent AG-CRF model and of the overall hierarchical framework." ] }
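The two-stream pattern recurring above can be sketched as one encoder for the RGB frame (appearance) and one for optical flow (motion), fused by concatenation before a 1 × 1 per-pixel classifier. Depths and widths are illustrative assumptions, not any of the cited architectures.

```python
import torch
import torch.nn as nn

class TwoStreamSegNet(nn.Module):
    """Toy two-stream segmentation head: appearance and motion encoders
    fused by channel concatenation, then a 1x1 pixel-wise classifier."""
    def __init__(self, num_classes=2, width=16):
        super().__init__()
        def encoder(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            )
        self.appearance = encoder(3)  # RGB frame
        self.motion = encoder(2)      # 2-channel optical flow
        self.classifier = nn.Conv2d(2 * width, num_classes, 1)

    def forward(self, frame, flow):
        fused = torch.cat((self.appearance(frame), self.motion(flow)), dim=1)
        return self.classifier(fused)  # per-pixel logits

frame, flow = torch.randn(1, 3, 64, 64), torch.randn(1, 2, 64, 64)
print(TwoStreamSegNet()(frame, flow).shape)  # torch.Size([1, 2, 64, 64])
```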
1907.08895
2963587483
Convolutional Neural Network (CNN) based image segmentation has made great progress in recent years. However, video object segmentation remains a challenging task due to its high computational complexity. Most of the previous methods employ a two-stream CNN framework to handle spatial and motion features separately. In this paper, we propose an end-to-end encoder-decoder style 3D CNN to aggregate spatial and temporal information simultaneously for video object segmentation. To efficiently process video, we propose 3D separable convolution for the pyramid pooling module and decoder, which dramatically reduces the number of operations while maintaining the performance. Moreover, we also extend our framework to video action segmentation by adding an extra classifier to predict the action label for actors in videos. Extensive experiments on several video datasets demonstrate the superior performance of the proposed approach for action and object segmentation compared to the state-of-the-art.
Action segmentation provides pixel-level localization for actions (action segmentation maps), which is more precise than bounding-box action localization. Lu et al. @cite_35 propose a supervoxel hierarchy to enforce the consistency of human segmentation in video. Gavrilyuk et al. @cite_17 infer the pixel-level segmentation of an actor and its action in video from a natural-language input sentence.
{ "cite_N": [ "@cite_35", "@cite_17" ], "mid": [ "2264423349", "1912148408" ], "abstract": [ "Actor-action semantic segmentation made an important step toward advanced video understanding problems: what action is happening; who is performing the action; and where is the action in space-time. Current models for this problem are local, based on layered CRFs, and are unable to capture long-ranging interaction of video parts. We propose a new model that combines these local labeling CRFs with a hierarchical supervoxel decomposition. The supervoxels provide cues for possible groupings of nodes, at various scales, in the CRFs to encourage adaptive, high-order groups for more effective labeling. Our model is dynamic and continuously exchanges information during inference: the local CRFs influence what supervoxels in the hierarchy are active, and these active nodes influence the connectivity in the CRF; we hence call it a grouping process model. The experimental results on a recent large-scale video dataset show a large margin of 60 relative improvement over the state of the art, which demonstrates the effectiveness of the dynamic, bidirectional flow between labeling and grouping.", "Detailed analysis of human action, such as action classification, detection and localization has received increasing attention from the community; datasets like JHMDB have made it plausible to conduct studies analyzing the impact that such deeper information has on the greater action understanding problem. However, detailed automatic segmentation of human action has comparatively been unexplored. In this paper, we take a step in that direction and propose a hierarchical MRF model to bridge low-level video fragments with high-level human motion and appearance; novel higher-order potentials connect different levels of the supervoxel hierarchy to enforce the consistency of the human segmentation by pulling from different segment-scales. Our single layer model significantly outperforms the current state-of-the-art on actionness, and our full model improves upon the single layer baselines in action segmentation." ] }
1907.08845
2962687069
Predicting future frames in natural video sequences is a new challenge that is receiving increasing attention in the computer vision community. However, existing models suffer from severe loss of temporal information when the predicted sequence is long. Compared to previous methods focusing on generating more realistic contents, this paper extensively studies the importance of sequential order information for video generation. A novel Shuffling sEquence gEneration network (SEE-Net) is proposed that can learn to discriminate unnatural sequential orders by shuffling the video frames and comparing them to the real video sequence. Systematic experiments on three datasets with both synthetic and real-world videos manifest the effectiveness of shuffling sequence generation for video prediction in our proposed model and demonstrate state-of-the-art performance by both qualitative and quantitative evaluations. The source code is available at this https URL.
The task of video frame prediction has received growing attention in the computer vision community. Early work investigates object motion prediction @cite_28. Advanced neural network approaches were then applied to directly predict future frames @cite_32 @cite_1 @cite_11 @cite_10. Mathieu et al. @cite_32 proposed a multi-scale auto-encoder network with both a gradient difference loss and an adversarial loss. PredNet @cite_1 is inspired by the concept of predictive coding from the neuroscience literature: each layer in the PredNet model produces local predictions and only forwards deviations from these predictions to the subsequent network layers. Vondrick et al. @cite_11 @cite_10 train a deep regression network to predict future frame representations. Unlike future frame prediction, Babaeizadeh et al. @cite_2 and Lee et al. @cite_31 address video prediction with stochastic approaches that can predict a different possible future for each sample of their latent variables. A shared drawback of these methods is the lack of explicit control over temporal information extraction; our work therefore disentangles the motion information from video frames to better learn temporal information.
{ "cite_N": [ "@cite_28", "@cite_1", "@cite_32", "@cite_2", "@cite_31", "@cite_10", "@cite_11" ], "mid": [ "2751683986", "2964151830", "2401640538", "2963253230", "2613736160", "2521071105", "2963665412" ], "abstract": [ "Predicting the future from a sequence of video frames has been recently a sought after yet challenging task in the field of computer vision and machine learning. Although there have been efforts for tracking using motion trajectories and flow features, the complex problem of generating unseen frames has not been studied extensively. In this paper, we deal with this problem using convolutional models within a multi-stage Generative Adversarial Networks (GAN) framework. The proposed method uses two stages of GANs to generate a crisp and clear set of future frames. Although GANs have been used in the past for predicting the future, none of the works consider the relation between subsequent frames in the temporal dimension. Our main contribution lies in formulating two objective functions based on the Normalized Cross Correlation (NCC) and the Pairwise Contrastive Divergence (PCD) for solving this problem. This method, coupled with the traditional L1 loss, has been experimented with three real-world video datasets, viz. Sports-1M, UCF-101 and the KITTI. Performance analysis reveals superior results over the recent state-of-the-art methods.", "While great strides have been made in using deep learning algorithms to solve supervised learning tasks, the problem of unsupervised learning - leveraging unlabeled examples to learn about the structure of a domain - remains a difficult unsolved challenge. Here, we explore prediction of future frames in a video sequence as an unsupervised learning rule for learning about the structure of the visual world. We describe a predictive neural network (\"PredNet\") architecture that is inspired by the concept of \"predictive coding\" from the neuroscience literature. These networks learn to predict future frames in a video sequence, with each layer in the network making local predictions and only forwarding deviations from those predictions to subsequent network layers. We show that these networks are able to robustly learn to predict the movement of synthetic (rendered) objects, and that in doing so, the networks learn internal representations that are useful for decoding latent object parameters (e.g. pose) that support object recognition with fewer training views. We also show that these networks can scale to complex natural image streams (car-mounted camera videos), capturing key aspects of both egocentric movement and the movement of objects in the visual scene, and the representation learned in this setting is useful for estimating the steering angle. These results suggest that prediction represents a powerful framework for unsupervised learning, allowing for implicit learning of object and scene structure.", "While great strides have been made in using deep learning algorithms to solve supervised learning tasks, the problem of unsupervised learning - leveraging unlabeled examples to learn about the structure of a domain - remains a difficult unsolved challenge. Here, we explore prediction of future frames in a video sequence as an unsupervised learning rule for learning about the structure of the visual world. We describe a predictive neural network (\"PredNet\") architecture that is inspired by the concept of \"predictive coding\" from the neuroscience literature. 
These networks learn to predict future frames in a video sequence, with each layer in the network making local predictions and only forwarding deviations from those predictions to subsequent network layers. We show that these networks are able to robustly learn to predict the movement of synthetic (rendered) objects, and that in doing so, the networks learn internal representations that are useful for decoding latent object parameters (e.g. pose) that support object recognition with fewer training views. We also show that these networks can scale to complex natural image streams (car-mounted camera videos), capturing key aspects of both egocentric movement and the movement of objects in the visual scene, and the representation learned in this setting is useful for estimating the steering angle. Altogether, these results suggest that prediction represents a powerful framework for unsupervised learning, allowing for implicit learning of object and scene structure.", "We propose a hierarchical approach for making long-term predictions of future frames. To avoid inherent compounding errors in recursive pixel-level prediction, we propose to first estimate high-level structure in the input frames, then predict how that structure evolves in the future, and finally by observing a single frame from the past and the predicted high-level structure, we construct the future frames without having to observe any of the pixel-level predictions. Long-term video prediction is difficult to perform by recurrently observing the predicted frames because the small errors in pixel space exponentially amplify as predictions are made deeper into the future. Our approach prevents pixel-level error propagation from happening by removing the need to observe the predicted frames. Our model is built with a combination of LSTM and analogy-based encoder-decoder convolutional neural networks, which independently predict the video structure and generate the future frames, respectively. In experiments, our model is evaluated on the Human 3.6M and Penn Action datasets on the task of long-term pixel-level video prediction of humans performing actions and demonstrate significantly better results than the state-of-the-art.", "Given a visual history, multiple future outcomes for a video scene are equally probable, in other words, the distribution of future outcomes has multiple modes. Multimodality is notoriously hard to handle by standard regressors or classifiers: the former regress to the mean and the latter discretize a continuous high dimensional output space. In this work, we present stochastic neural network architectures that handle such multimodality through stochasticity: future trajectories of objects, body joints or frames are represented as deep, non-linear transformations of random (as opposed to deterministic) variables. Such random variables are sampled from simple Gaussian distributions whose means and variances are parametrized by the output of convolutional encoders over the visual history. We introduce novel convolutional architectures for predicting future body joint trajectories that outperform fully connected alternatives DBLP:journals corr WalkerDGH16 . We introduce stochastic spatial transformers through optical flow warping for predicting future frames, which outperform their deterministic equivalents DBLP:journals corr PatrauceanHC15 . Training stochastic networks involves an intractable marginalization over stochastic variables. 
We compare various training schemes that handle such marginalization through a) straightforward sampling from the prior, b) conditional variational autoencoders NIPS2015_5775,DBLP:journals corr WalkerDGH16 , and, c) a proposed K-best-sample loss that penalizes the best prediction under a fixed \"prediction budget\". We show experimental results on object trajectory prediction, human body joint trajectory prediction and video prediction under varying future uncertainty, validating quantitatively and qualitatively our architectural choices and training schemes.", "We consider the problem of next frame prediction from video input. A recurrent convolutional neural network is trained to predict depth from monocular video input, which, along with the current video image and the camera trajectory, can then be used to compute the next frame. Unlike prior next-frame prediction approaches, we take advantage of the scene geometry and use the predicted depth for generating the next frame prediction. Our approach can produce rich next frame predictions which include depth information attached to each pixel. Another novel aspect of our approach is that it predicts depth from a sequence of images (e.g. in a video), rather than from a single still image. We evaluate the proposed approach on the KITTI dataset, a standard dataset for benchmarking tasks relevant to autonomous driving. The proposed method produces results which are visually and numerically superior to existing methods that directly predict the next frame. We show that the accuracy of depth prediction improves as more prior frames are considered.", "We consider the problem of next frame prediction from video input. A recurrent convolutional neural network is trained to predict depth from monocular video input, which, along with the current video image and the camera trajectory, can then be used to compute the next frame. Unlike prior next-frame prediction approaches, we take advantage of the scene geometry and use the predicted depth for generating the next frame prediction. Our approach can produce rich next frame predictions which include depth information attached to each pixel. Another novel aspect of our approach is that it predicts depth from a sequence of images (e.g. in a video), rather than from a single still image. We evaluate the proposed approach on the KITTI dataset, a standard dataset for benchmarking tasks relevant to autonomous driving. The proposed method produces results which are visually and numerically superior to existing methods that directly predict the next frame. We show that the accuracy of depth prediction improves as more prior frames are considered." ] }
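The shuffling idea summarized in the SEE-Net abstract above — discriminating natural frame orders from shuffled ones — can be illustrated with a short sample-construction routine. This is a sketch of one plausible sampling scheme; the function name and negative count are assumptions, not taken from the paper:

```python
import torch

def make_order_samples(video, num_negatives=3):
    """Given a clip of shape (T, C, H, W) with T > 1, return (clips, labels):
    label 1 marks the natural frame order, label 0 a shuffled order."""
    T = video.shape[0]
    clips, labels = [video], [1]          # positive: original order
    for _ in range(num_negatives):
        perm = torch.randperm(T)
        while torch.equal(perm, torch.arange(T)):  # skip identity permutation
            perm = torch.randperm(T)
        clips.append(video[perm])          # negative: shuffled order
        labels.append(0)
    return torch.stack(clips), torch.tensor(labels)

clips, labels = make_order_samples(torch.randn(8, 3, 64, 64))
print(clips.shape, labels)  # torch.Size([4, 8, 3, 64, 64]) tensor([1, 0, 0, 0])
```

A discriminator trained on such pairs learns exactly the kind of sequential-order cue the abstract describes.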
1907.08845
2962687069
Predicting future frames in natural video sequences is a new challenge that is receiving increasing attention in the computer vision community. However, existing models suffer from severe loss of temporal information when the predicted sequence is long. Compared to previous methods focusing on generating more realistic contents, this paper extensively studies the importance of sequential order information for video generation. A novel Shuffling sEquence gEneration network (SEE-Net) is proposed that can learn to discriminate unnatural sequential orders by shuffling the video frames and comparing them to the real video sequence. Systematic experiments on three datasets with both synthetic and real-world videos manifest the effectiveness of shuffling sequence generation for video prediction in our proposed model and demonstrate state-of-the-art performance by both qualitative and quantitative evaluations. The source code is available at this https URL.
Several works utilise shuffle-based self-supervised learning methods on videos, which do not require external annotations @cite_40 @cite_5 @cite_16 . In @cite_40 , based on ordinal supervision provided by visual tracking, Wang and Gupta designed a Siamese-triplet network with a ranking loss function to learn visual representations. Misra et al. @cite_5 proposed a self-supervised approach using a convolutional neural network (CNN) for a sequential verification task, where frames in the correct and incorrect order form positive and negative samples, respectively, to train their model. Lee et al. @cite_16 presented a self-supervised representation learning approach using temporally shuffled video frames without semantic labels, training a convolutional neural network to sort the shuffled sequences and output the correct order. In this work, we apply the shuffle-based self-supervised learning method to optical flow images to extract ordinal information from the motion of objects, surfaces, and edges.
{ "cite_N": [ "@cite_5", "@cite_40", "@cite_16" ], "mid": [ "2950809610", "219040644", "2518754566" ], "abstract": [ "We propose a new self-supervised CNN pre-training technique based on a novel auxiliary task called \"odd-one-out learning\". In this task, the machine is asked to identify the unrelated or odd element from a set of otherwise related elements. We apply this technique to self-supervised video representation learning where we sample subsequences from videos and ask the network to learn to predict the odd video subsequence. The odd video subsequence is sampled such that it has wrong temporal order of frames while the even ones have the correct temporal order. Therefore, to generate a odd-one-out question no manual annotation is required. Our learning machine is implemented as multi-stream convolutional neural network, which is learned end-to-end. Using odd-one-out networks, we learn temporal representations for videos that generalizes to other related tasks such as action recognition. On action classification, our method obtains 60.3 on the UCF101 dataset using only UCF101 data for training which is approximately 10 better than current state-of-the-art self-supervised learning methods. Similarly, on HMDB51 dataset we outperform self-supervised state-of-the art methods by 12.7 on action classification task.", "Is strong supervision necessary for learning a good visual representation? Do we really need millions of semantically-labeled images to train a Convolutional Neural Network (CNN)? In this paper, we present a simple yet surprisingly powerful approach for unsupervised learning of CNN. Specifically, we use hundreds of thousands of unlabeled videos from the web to learn visual representations. Our key idea is that visual tracking provides the supervision. That is, two patches connected by a track should have similar visual representation in deep feature space since they probably belong to same object or object part. We design a Siamese-triplet network with a ranking loss function to train this CNN representation. Without using a single image from ImageNet, just using 100K unlabeled videos and the VOC 2012 dataset, we train an ensemble of unsupervised networks that achieves 52 mAP (no bounding box regression). This performance comes tantalizingly close to its ImageNet-supervised counterpart, an ensemble which achieves a mAP of 54.4 . We also show that our unsupervised network can perform competitively in other tasks such as surface-normal estimation.", "Learning rich visual representations often require training on datasets of millions of manually annotated examples. This substantially limits the scalability of learning effective representations as labeled data is expensive or scarce. In this paper, we address the problem of unsupervised visual representation learning from a large, unlabeled collection of images. By representing each image as a node and each nearest-neighbor matching pair as an edge, our key idea is to leverage graph-based analysis to discover positive and negative image pairs (i.e., pairs belonging to the same and different visual categories). Specifically, we propose to use a cycle consistency criterion for mining positive pairs and geodesic distance in the graph for hard negative mining. We show that the mined positive and negative image pairs can provide accurate supervisory signals for learning effective representations using Convolutional Neural Networks (CNNs). 
We demonstrate the effectiveness of the proposed unsupervised constraint mining method in two settings: (1) unsupervised feature learning and (2) semi-supervised learning. For unsupervised feature learning, we obtain competitive performance with several state-of-the-art approaches on the PASCAL VOC 2007 dataset. For semi-supervised learning, we show boosted performance by incorporating the mined constraints on three image classification datasets." ] }
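The Siamese-triplet ranking loss attributed to Wang and Gupta in the paragraph above has a standard margin form; the sketch below is a generic PyTorch rendering (the margin value is an assumption), where tracked patch pairs supply the anchor and positive and an unrelated patch the negative. PyTorch's built-in nn.TripletMarginLoss implements the same idea:

```python
import torch
import torch.nn.functional as F

def triplet_ranking_loss(anchor, positive, negative, margin=0.5):
    """Embeddings of temporally related patches (anchor, positive) should be
    closer than unrelated ones (anchor, negative) by at least `margin`."""
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()

a, p, n = (torch.randn(16, 128) for _ in range(3))  # batch of 128-d embeddings
print(triplet_ranking_loss(a, p, n))
```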
1907.08941
2962880793
This paper conducts research on short-term electric load forecasting under the background of big data. It builds a new electric load forecast model based on Deep Auto-Encoder Networks (DAENs), which takes into account multidimensional load-related data sets including historical load value, temperature, day type, etc. A new distributed short-term load forecast method based on TensorFlow and DAENs is therefore proposed, and its algorithm flowchart is designed. This method overcomes the shortcomings of traditional neural network methods, such as over-fitting, slow convergence, and local optima. Case study results show that the proposed method has obvious advantages in prediction accuracy, stability, and expansibility compared with those based on traditional neural networks. Thus, this model can better meet the demands of short-term electric load forecasting under the big data scenario.
Auto-Encoder networks have been widely explored in recent years. Reference @cite_4 introduces a new method to analyze the human immunodeficiency virus using a combination of Auto-Encoder networks and genetic algorithms, which outperforms conventional feedforward neural network models and is a much better classifier. Reference @cite_8 extracts and composes robust features using denoising Auto-Encoders, with surprisingly strong results. In reference @cite_19 , an algorithm that aids the curation of gene annotations and predicts previously-unidentified gene functions is designed using deep Auto-Encoder neural networks. Experiments show that deep Auto-Encoder neural networks achieve better performance than other standard machine learning methods, including the popular truncated singular value decomposition. Reference @cite_30 proposes a novel method for modeling the excitation through an Auto-Encoder, which produces speech of higher perceptual quality compared to conventional pulse-excited speech signals. Reference @cite_16 uses Auto-Encoder neural networks for water demand predictive modeling. In the research described above, Auto-Encoder networks have been successfully used for solving classification or prediction problems.
{ "cite_N": [ "@cite_30", "@cite_4", "@cite_8", "@cite_19", "@cite_16" ], "mid": [ "2517380078", "2145094598", "2962891349", "2963175699", "2560266880" ], "abstract": [ "Following a number of studies that have interrogated the usability of an autoencoder neural network in various classification and regression approximation problems, this manuscript focuses on its usability in water demand predictive modelling, with the Gauteng Province of the Republic of South Africa being chosen as a case study. Water demand predictive modelling is a regression approximation problem. This autoencoder network is constructed from a simple multi-layer network, with a total of 6 parameters in both the input and output units, and 5 nodes in the hidden unit. These 6 parameters include a figure that represents population size and water demand values of 5 consecutive days. The water demand value of the fifth day is the variable of interest, that is, the variable that is being predicted. The optimum number of nodes in the hidden unit is determined through the use of a simple, less computationally expensive technique. The performance of this network is measured against prediction accuracy, average prediction error, and the time it takes the network to generate a single output. The dimensionality of the network is also taken into consideration. In order to benchmark the performance of this autoencoder network, a conventional neural network is also implemented and evaluated using the same measures of performance. The conventional network is slightly outperformed by the autoencoder network.", "We explore an original strategy for building deep networks, based on stacking layers of denoising autoencoders which are trained locally to denoise corrupted versions of their inputs. The resulting algorithm is a straightforward variation on the stacking of ordinary autoencoders. It is however shown on a benchmark of classification problems to yield significantly lower classification error, thus bridging the performance gap with deep belief networks (DBN), and in several cases surpassing it. Higher level representations learnt in this purely unsupervised fashion also help boost the performance of subsequent SVM classifiers. Qualitative experiments show that, contrary to ordinary autoencoders, denoising autoencoders are able to learn Gabor-like edge detectors from natural image patches and larger stroke detectors from digit images. This work clearly establishes the value of using a denoising criterion as a tractable unsupervised objective to guide the learning of useful higher level representations.", "Deep Neural Networks trained as image auto-encoders have recently emerged as a promising direction for advancing the state-of-the-art in image compression. The key challenge in learning such networks is twofold: To deal with quantization, and to control the trade-off between reconstruction error (distortion) and entropy (rate) of the latent image representation. In this paper, we focus on the latter challenge and propose a new technique to navigate the rate-distortion trade-off for an image compression auto-encoder. The main idea is to directly model the entropy of the latent representation by using a context model: A 3D-CNN which learns a conditional probability model of the latent distribution of the auto-encoder. 
During training, the auto-encoder makes use of the context model to estimate the entropy of its representation, and the context model is concurrently updated to learn the dependencies between the symbols in the latent representation. Our experiments show that this approach, when measured in MS-SSIM, yields a state-of-the-art image compression system based on a simple convolutional auto-encoder.", "In hybrid hidden Markov model artificial neural networks (HMM ANN) automatic speech recognition (ASR) system, the phoneme class conditional probabilities are estimated by first extracting acoustic features from the speech signal based on prior knowledge such as, speech perception or and speech production knowledge, and, then modeling the acoustic features with an ANN. Recent advances in machine learning techniques, more specifically in the field of image processing and text processing, have shown that such divide and conquer strategy (i.e., separating feature extraction and modeling steps) may not be necessary. Motivated from these studies, in the framework of convolutional neural networks (CNNs), this paper investigates a novel approach, where the input to the ANN is raw speech signal and the output is phoneme class conditional probability estimates. On TIMIT phoneme recognition task, we study different ANN architectures to show the benefit of CNNs and compare the proposed approach against conventional approach where, spectral-based feature MFCC is extracted and modeled by a multilayer perceptron. Our studies show that the proposed approach can yield comparable or better phoneme recognition performance when compared to the conventional approach. It indicates that CNNs can learn features relevant for phoneme classification automatically from the raw speech signal.", "Existing techniques of watermarking make use of transform domain to have better robustness towards attacks. Here, we propose a novel learning based auto-encoder Convolutional Neural Network (CNN) for non-blind watermarking which outperforms the existing frequency domain techniques in terms of imperceptibility and robustness adding new dimension of usage of CNNs towards security. As these CNNs efficiently learn the features and represent the input at the output, they find applications in all the fields of science. Code book images of different size are generated using the proposed architecture and subjected to different attacks. Results of the proposed method are compared with state of the art methods at different noises and attacks such as Gaussian, speckle, compression effects, cropping, filtering, etc. The proposed scheme is validated against various possible attacks and its out-performance with state of the art methods is presented. Further, transfer learning capabilities of auto-encoder CNN for efficient way of learning new code book is presented. The inability of intruder towards retrieval of data without the knowledge of architecture and keys employed is also discussed." ] }
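As a concrete reference point for the denoising Auto-Encoder of @cite_8 discussed in the record above, here is a minimal PyTorch sketch; the layer sizes and noise level are illustrative placeholders, not values from any cited paper. The network sees a Gaussian-corrupted input during training but is scored against the clean input:

```python
import torch
import torch.nn as nn

class DenoisingAutoEncoder(nn.Module):
    """Corrupts the input with Gaussian noise at train time and learns to
    reconstruct the clean signal, forcing robust hidden features."""
    def __init__(self, n_in=24, n_hidden=8, noise_std=0.1):
        super().__init__()
        self.noise_std = noise_std
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.ReLU())
        self.decoder = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        if self.training:                      # corrupt only while training
            x = x + self.noise_std * torch.randn_like(x)
        return self.decoder(self.encoder(x))

model = DenoisingAutoEncoder()
x = torch.randn(32, 24)                        # e.g. load, temperature, day type
loss = nn.functional.mse_loss(model(x), x)     # target is the clean input
loss.backward()
```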
1907.08873
2963235411
Cyberbullying and cyberaggression are increasingly worrisome phenomena affecting people across all demographics. More than half of young social media users worldwide have been exposed to such prolonged and/or coordinated digital harassment. Victims can experience a wide range of emotions, with negative consequences such as embarrassment, depression, isolation from other community members, which carry the risk of leading to even more critical consequences, such as suicide attempts. In this work, we take the first concrete steps to understand the characteristics of abusive behavior in Twitter, one of today's largest social media platforms. We analyze 1.2 million users and 2.1 million tweets, comparing users participating in discussions around seemingly normal topics like the NBA, to those more likely to be hate-related, such as the Gamergate controversy, or the gender pay inequality at the BBC station. We also explore specific manifestations of abusive behavior, i.e., cyberbullying and cyberaggression, in one of the hate-related communities (Gamergate). We present a robust methodology to distinguish bullies and aggressors from normal Twitter users by considering text, user, and network-based attributes. Using various state-of-the-art machine learning algorithms, we classify these accounts with over 90% accuracy and AUC. Finally, we discuss the current status of Twitter user accounts marked as abusive by our methodology, and study the performance of potential mechanisms that can be used by Twitter to suspend users in the future.
Cyberbullying in Social Media. Focusing more on cyberbullying behaviors, @cite_67 initially decompose such a phenomenon into a set of sensitive topics, i.e., race, culture, sexuality, and intelligence. Then, they analyze YouTube comments from controversial videos based on a bag-of-words-driven text classification. Also, @cite_50 presents a method which automatically detects bully users on YouTube based on a "bulliness" score. @cite_20 aim to detect fine-grained types of cyberbullying, e.g., threats and insults, by considering linguistic characteristics of cyberbullying-related content extracted from Ask.fm. In their analysis they consider three types of possible participants in a cyberbullying conversation, i.e., victim, harasser, and bystander, with the latter further decomposed into bystander-defenders and bystander-assistants, who support the victim or the harasser, respectively. @cite_41 exploit Twitter messages to detect bullying cases which are specifically related to the gender bullying phenomenon. @cite_70 , in addition to the comments posted on Instagram, also consider the corresponding images in an effort to detect and distinguish between cyberbullying and cyberaggression. Finally, @cite_31 detect cyberbullying words and rumor texts on Twitter, as well as demographics of bullies such as their name, age, and gender.
{ "cite_N": [ "@cite_67", "@cite_41", "@cite_70", "@cite_50", "@cite_31", "@cite_20" ], "mid": [ "2164628858", "2594902547", "1823790170", "2572180805", "2283668614", "2044173330" ], "abstract": [ "As a result of the invention of social networks, friendships, relationships and social communication are all undergoing changes and new definitions seem to be applicable. One may have hundreds of ‘friends’ without even seeing their faces. Meanwhile, alongside this transition there is increasing evidence that online social applications are used by children and adolescents for bullying. State-of-the-art studies in cyberbullying detection have mainly focused on the content of the conversations while largely ignoring the characteristics of the actors involved in cyberbullying. Social studies on cyberbullying reveal that the written language used by a harasser varies with the author’s features including gender. In this study we used a support vector machine model to train a gender-specific text classifier. We demonstrated that taking gender-specific language features into account improves the discrimination capacity of a classifier to detect cyberbullying.", "In recent years, bullying and aggression against social media users have grown significantly, causing serious consequences to victims of all demographics. Nowadays, cyberbullying affects more than half of young social media users worldwide, suffering from prolonged and or coordinated digital harassment. Also, tools and technologies geared to understand and mitigate it are scarce and mostly ineffective. In this paper, we present a principled and scalable approach to detect bullying and aggressive behavior on Twitter. We propose a robust methodology for extracting text, user, and network-based attributes, studying the properties of bullies and aggressors, and what features distinguish them from regular users. We find that bullies post less, participate in fewer online communities, and are less popular than normal users. Aggressors are relatively popular and tend to include more negativity in their posts. We evaluate our methodology using a corpus of 1.6M tweets posted over 3 months, and show that machine learning classification algorithms can accurately detect users exhibiting bullying and aggressive behavior, with over 90 AUC.", "Cyberbullying is becoming a major concern in online environments with troubling consequences. However, most of the technical studies have focused on the detection of cyberbullying through identifying harassing comments rather than preventing the incidents by detecting the bullies. In this work we study the automatic detection of bully users on YouTube. We compare three types of automatic detection: an expert system, supervised machine learning models, and a hybrid type combining the two. All these systems assign a score indicating the level of “bulliness” of online bullies. We demonstrate that the expert system outperforms the machine learning models. The hybrid classifier shows an even better performance.", "We study detection of cyberbullying in photo-sharing networks, with an eye on developing early-warning mechanisms for the prediction of posted images vulnerable to attacks. Given the overwhelming increase in media accompanying text in online social networks, we investigate use of posted images and captions for improved detection of bullying in response to shared content. 
We validate our approaches on a dataset of over 3000 images along with peer-generated comments posted on the Instagram photo-sharing network, running comprehensive experiments using a variety of classifiers and feature sets. In addition to standard image and text features, we leverage several novel features including topics determined from image captions and a pretrained convolutional neural network on image pixels. We identify the importance of these advanced features in assisting detection of cyberbullying in posted comments. We also provide results on classification of images and captions themselves as potential targets for cyberbullies.", "The recent development of social media poses new challenges to the research community in analyzing online interactions between people. Social networking sites offer great opportunities for connecting with others, but also increase the vulnerability of young people to undesirable phenomena, such as cybervictimization. Recent research reports that on average, 20 to 40 of all teenagers have been victimized online. In this paper, we focus on cyberbullying as a particular form of cybervictimization. Successful prevention depends on the adequate detection of potentially harmful messages. However, given the massive information overload on the Web, there is a need for intelligent systems to identify potential risks automatically. We present the construction and annotation of a corpus of Dutch social media posts annotated with fine-grained cyberbullying-related text categories, such as insults and threats. Also, the specific participants (harasser, victim or bystander) in a cyberbullying conversation are identified to enhance the analysis of human interactions involving cyberbullying. Apart from describing our dataset construction and annotation, we present proof-of-concept experiments on the automatic identification of cyberbullying events and fine-grained cyberbullying categories.", "Cyberbullying (harassment on social networks) is widely recognized as a serious social problem, especially for adolescents. It is as much a threat to the viability of online social networks for youth today as spam once was to email in the early days of the Internet. Current work to tackle this problem has involved social and psychological studies on its prevalence as well as its negative effects on adolescents. While true solutions rest on teaching youth to have healthy personal relationships, few have considered innovative design of social network software as a tool for mitigating this problem. Mitigating cyberbullying involves two key components: robust techniques for effective detection and reflective user interfaces that encourage users to reflect upon their behavior and their choices. Spam filters have been successful by applying statistical approaches like Bayesian networks and hidden Markov models. They can, like Google’s GMail, aggregate human spam judgments because spam is sent nearly identically to many people. Bullying is more personalized, varied, and contextual. In this work, we present an approach for bullying detection based on state-of-the-art natural language processing and a common sense knowledge base, which permits recognition over a broad spectrum of topics in everyday life. We analyze a more narrow range of particular subject matter associated with bullying (e.g. 
appearance, intelligence, racial and ethnic slurs, social acceptance, and rejection), and construct BullySpace, a common sense knowledge base that encodes particular knowledge about bullying situations. We then perform joint reasoning with common sense knowledge about a wide range of everyday life topics. We analyze messages using our novel AnalogySpace common sense reasoning technique. We also take into account social network analysis and other factors. We evaluate the model on real-world instances that have been reported by users on Formspring, a social networking website that is popular with teenagers. On the intervention side, we explore a set of reflective user-interaction paradigms with the goal of promoting empathy among social network participants. We propose an “air traffic control”-like dashboard, which alerts moderators to large-scale outbreaks that appear to be escalating or spreading and helps them prioritize the current deluge of user complaints. For potential victims, we provide educational material that informs them about how to cope with the situation, and connects them with emotional support from others. A user evaluation shows that in-context, targeted, and dynamic help during cyberbullying situations fosters end-user reflection that promotes better coping strategies." ] }
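A bag-of-words-driven text classifier of the kind @cite_67 applies to YouTube comments can be assembled in a few lines of scikit-learn. The snippet below is a toy sketch — the example comments, labels, and choice of Naive Bayes are illustrative, not a reproduction of any cited system:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy labeled comments; 1 = bullying-related, 0 = benign.
texts = ["you are pathetic", "great game last night",
         "nobody likes you", "see you at practice"]
labels = [1, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
clf.fit(texts, labels)
print(clf.predict(["you are a loser"]))  # flags the abusive example
```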
1907.08873
2963235411
Cyberbullying and cyberaggression are increasingly worrisome phenomena affecting people across all demographics. More than half of young social media users worldwide have been exposed to such prolonged and/or coordinated digital harassment. Victims can experience a wide range of emotions, with negative consequences such as embarrassment, depression, isolation from other community members, which carry the risk of leading to even more critical consequences, such as suicide attempts. In this work, we take the first concrete steps to understand the characteristics of abusive behavior in Twitter, one of today's largest social media platforms. We analyze 1.2 million users and 2.1 million tweets, comparing users participating in discussions around seemingly normal topics like the NBA, to those more likely to be hate-related, such as the Gamergate controversy, or the gender pay inequality at the BBC station. We also explore specific manifestations of abusive behavior, i.e., cyberbullying and cyberaggression, in one of the hate-related communities (Gamergate). We present a robust methodology to distinguish bullies and aggressors from normal Twitter users by considering text, user, and network-based attributes. Using various state-of-the-art machine learning algorithms, we classify these accounts with over 90% accuracy and AUC. Finally, we discuss the current status of Twitter user accounts marked as abusive by our methodology, and study the performance of potential mechanisms that can be used by Twitter to suspend users in the future.
Abusive Incidents in Game Communities. The rise of cyberbullying and abusive incidents in general is also evident in online game communities. Since these communities are widely used by people of all ages, this phenomenon has attracted the interest of the research community. For instance, @cite_61 studies cyberbullying and other toxic behaviors in team-competition online games in an effort to detect, prevent, and counter-act toxic behavior. @cite_13 investigate the prevalence of sexism in online game communities, finding that personality traits, demographic variables, and levels of game-play predict sexist attitudes towards women who play video games.
{ "cite_N": [ "@cite_61", "@cite_13" ], "mid": [ "2078178105", "2949748152" ], "abstract": [ "In this work we explore cyberbullying and other toxic behavior in team competition online games. Using a dataset of over 10 million player reports on 1.46 million toxic players along with corresponding crowdsourced decisions, we test several hypotheses drawn from theories explaining toxic behavior. Besides providing large-scale, empirical based understanding of toxic behavior, our work can be used as a basis for building systems to detect, prevent, and counter-act toxic behavior.", "In this work we explore cyberbullying and other toxic behavior in team competition online games. Using a dataset of over 10 million player reports on 1.46 million toxic players along with corresponding crowdsourced decisions, we test several hypotheses drawn from theories explaining toxic behavior. Besides providing large-scale, empirical based understanding of toxic behavior, our work can be used as a basis for building systems to detect, prevent, and counter-act toxic behavior." ] }
1907.08873
2963235411
Cyberbullying and cyberaggression are increasingly worrisome phenomena affecting people across all demographics. More than half of young social media users worldwide have been exposed to such prolonged and/or coordinated digital harassment. Victims can experience a wide range of emotions, with negative consequences such as embarrassment, depression, isolation from other community members, which carry the risk of leading to even more critical consequences, such as suicide attempts. In this work, we take the first concrete steps to understand the characteristics of abusive behavior in Twitter, one of today's largest social media platforms. We analyze 1.2 million users and 2.1 million tweets, comparing users participating in discussions around seemingly normal topics like the NBA, to those more likely to be hate-related, such as the Gamergate controversy, or the gender pay inequality at the BBC station. We also explore specific manifestations of abusive behavior, i.e., cyberbullying and cyberaggression, in one of the hate-related communities (Gamergate). We present a robust methodology to distinguish bullies and aggressors from normal Twitter users by considering text, user, and network-based attributes. Using various state-of-the-art machine learning algorithms, we classify these accounts with over 90% accuracy and AUC. Finally, we discuss the current status of Twitter user accounts marked as abusive by our methodology, and study the performance of potential mechanisms that can be used by Twitter to suspend users in the future.
Abusive Detection Methods. Various supervised approaches have been used for monitoring different instances of online abusive behavior. For instance, the authors in @cite_74 use a regression model, whereas @cite_50 @cite_67 @cite_20 rely on other methods such as Naive Bayes, Support Vector Machines (SVM), and Decision Trees (J48). In contrast, @cite_94 use a graph-based approach based on likes and comments to build bipartite graphs and identify negative behavior. A similar, graph-based approach is also used in @cite_70 . In all of the previous works, a variety of attributes has been exploited in an effort to accurately detect harassment instances. Text-related features, such as punctuation, URLs, part-of-speech tags, n-grams, Bag of Words (BoW), and lexical features that rely on dictionaries of offensive words, as well as user-related ones, e.g., a user's membership duration and activity, or number of friends and followers, are among the most popular.
{ "cite_N": [ "@cite_67", "@cite_70", "@cite_94", "@cite_50", "@cite_74", "@cite_20" ], "mid": [ "2786315637", "2119769989", "2047449974", "2610483642", "2160685721", "2622349515" ], "abstract": [ "Hate speech, offensive language, sexism, racism and other types of abusive behavior have become a common phenomenon in many online social media platforms. In recent years, such diverse abusive behaviors have been manifesting with increased frequency and levels of intensity. This is due to the openness and willingness of popular media platforms, such as Twitter and Facebook, to host content of sensitive or controversial topics. However, these platforms have not adequately addressed the problem of online abusive behavior, and their responsiveness to the effective detection and blocking of such inappropriate behavior remains limited. In the present paper, we study this complex problem by following a more holistic approach, which considers the various aspects of abusive behavior. To make the approach tangible, we focus on Twitter data and analyze user and textual properties from different angles of abusive posting behavior. We propose a deep learning architecture, which utilizes a wide variety of available metadata, and combines it with automatically-extracted hidden patterns within the text of the tweets, to detect multiple abusive behavioral norms which are highly inter-related. We apply this unified architecture in a seamless, transparent fashion to detect different types of abusive behavior (hate speech, sexism vs. racism, bullying, sarcasm, etc.) without the need for any tuning of the model architecture for each task. We test the proposed approach with multiple datasets addressing different and multiple abusive behaviors on Twitter. Our results demonstrate that it largely outperforms the state-of-art methods (between 21 and 45 improvement in AUC, depending on the dataset).", "Web 2.0 has led to the development and evolution of web-based communities and applications. These communities provide places for information sharing and collaboration. They also open t he door for inappropriate online activities, such as harassment, i n which some users post messages in a virtual community that are intention- ally offensive to other members of the community. It is a new and challenging task to detect online harassment; currently fe w systems attempt to solve this problem. In this paper, we use a supervised learning approach for dete ct- ing harassment. Our technique employs content features, sentiment features, and contextual features of documents. The experi mental results described herein show that our method achieves significant improvements over several baselines, including Term Frequency- Inverse Document Frequency (TFIDF) approaches. Identifica tion of online harassment is feasible when TFIDF is supplemented with sentiment and contextual feature attributes.", "In this paper, we propose a novel semi-supervised approach for detecting profanity-related offensive content in Twitter. Our approach exploits linguistic regularities in profane language via statistical topic modeling on a huge Twitter corpus, and detects offensive tweets using automatically these generated features. Our approach performs competitively with a variety of machine learning (ML) algorithms. 
For instance, our approach achieves a true positive rate (TP) of 75.1 over 4029 testing tweets using Logistic Regression, significantly outperforming the popular keyword matching baseline, which has a TP of 69.7 , while keeping the false positive rate (FP) at the same level as the baseline at about 3.77 . Our approach provides an alternative to large scale hand annotation efforts required by fully supervised learning approaches.", "Since its earliest days, harassment and abuse have plagued the Internet. Recent research has focused on in-domain methods to detect abusive content and faces several challenges, most notably the need to obtain large training corpora. In this paper, we introduce a novel computational approach to address this problem called Bag of Communities (BoC)---a technique that leverages large-scale, preexisting data from other Internet communities. We then apply BoC toward identifying abusive behavior within a major Internet community. Specifically, we compute a post's similarity to 9 other communities from 4chan, Reddit, Voat and MetaFilter. We show that a BoC model can be used on communities \"off the shelf\" with roughly 75 accuracy---no training examples are needed from the target community. A dynamic BoC model achieves 91.18 accuracy after seeing 100,000 human-moderated posts, and uniformly outperforms in-domain methods. Using this conceptual and empirical work, we argue that the BoC approach may allow communities to deal with a range of common problems, like abusive behavior, faster and with fewer engineering resources.", "Since the textual contents on online social media are highly unstructured, informal, and often misspelled, existing research on message-level offensive language detection cannot accurately detect offensive content. Meanwhile, user-level offensiveness detection seems a more feasible approach but it is an under researched area. To bridge this gap, we propose the Lexical Syntactic Feature (LSF) architecture to detect offensive content and identify potential offensive users in social media. We distinguish the contribution of pejoratives profanities and obscenities in determining offensive content, and introduce hand-authoring syntactic rules in identifying name-calling harassments. In particular, we incorporate a user's writing style, structure and specific cyber bullying content as features to predict the user's potentiality to send out offensive content. Results from experiments showed that our LSF framework performed significantly better than existing methods in offensive content detection. It achieves precision of 98.24 and recall of 94.34 in sentence offensive detection, as well as precision of 77.9 and recall of 77.8 in user offensive detection. Meanwhile, the processing speed of LSF is approximately 10msec per sentence, suggesting the potential for effective deployment in social media.", "Automatic abusive language detection is a difficult but important task for online social media. Our research explores a two-step approach of performing classification on abusive language and then classifying into specific types and compares it with one-step approach of doing one multi-class classification for detecting sexist and racist languages. With a public English Twitter corpus of 20 thousand tweets in the type of sexism and racism, our approach shows a promising performance of 0.827 F-measure by using HybridCNN in one-step and 0.824 F-measure by using logistic regression in two-steps." ] }
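The two feature families surveyed above — text-related (n-grams, BoW, lexical cues) and user-related (account attributes) — map naturally onto scikit-learn's ColumnTransformer. The sketch below uses toy data and hypothetical column names to show one way of fusing them before a classifier:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline

df = pd.DataFrame({
    "tweet": ["you all suck", "nice shot!", "go away loser", "well played"],
    "followers": [10, 500, 25, 800],
    "account_age_days": [30, 900, 15, 1200],
    "label": [1, 0, 1, 0],                     # 1 = abusive, 0 = normal
})

features = ColumnTransformer([
    ("text", TfidfVectorizer(), "tweet"),                        # BoW features
    ("user", "passthrough", ["followers", "account_age_days"]),  # user features
])
model = Pipeline([("features", features),
                  ("clf", RandomForestClassifier(n_estimators=50,
                                                 random_state=0))])
model.fit(df, df["label"])
print(model.predict(df))
```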
1907.08873
2963235411
Cyberbullying and cyberaggression are increasingly worrisome phenomena affecting people across all demographics. More than half of young social media users worldwide have been exposed to such prolonged and/or coordinated digital harassment. Victims can experience a wide range of emotions, with negative consequences such as embarrassment, depression, isolation from other community members, which carry the risk of leading to even more critical consequences, such as suicide attempts. In this work, we take the first concrete steps to understand the characteristics of abusive behavior in Twitter, one of today's largest social media platforms. We analyze 1.2 million users and 2.1 million tweets, comparing users participating in discussions around seemingly normal topics like the NBA, to those more likely to be hate-related, such as the Gamergate controversy, or the gender pay inequality at the BBC station. We also explore specific manifestations of abusive behavior, i.e., cyberbullying and cyberaggression, in one of the hate-related communities (Gamergate). We present a robust methodology to distinguish bullies and aggressors from normal Twitter users by considering text, user, and network-based attributes. Using various state-of-the-art machine learning algorithms, we classify these accounts with over 90% accuracy and AUC. Finally, we discuss the current status of Twitter user accounts marked as abusive by our methodology, and study the performance of potential mechanisms that can be used by Twitter to suspend users in the future.
Remarks. This article presents in a unified way and, more importantly, extends our previous work on aggressive behavior in Twitter, published in @cite_47 @cite_60 @cite_64 . Compared to the literature, we advance the state-of-the-art on cyberbullying and cyberaggression analysis and detection along the following dimensions: (i) we propose a robust methodology for extracting user, text, and network features on Twitter, beyond what has been shown in the past; (ii) we analyze user tweets, individually and in groups, and extract appropriate features connecting user behavior with a tendency towards aggression or bullying; (iii) we investigate the Gamergate and BBC gender-pay controversies and compare their activity and in-depth topics of discussion with those of users discussing normal topics (e.g., the NBA); (iv) we compare the performance of various types of machine learning algorithms, including random forests and neural networks, for the detection of bullying and aggression on Twitter under different setups; and (v) we contrast the performance of the best machine learning method with the suspension and deletion of offensive users from Twitter over time.
{ "cite_N": [ "@cite_47", "@cite_64", "@cite_60" ], "mid": [ "2594902547", "2047449974", "2612949191" ], "abstract": [ "In recent years, bullying and aggression against social media users have grown significantly, causing serious consequences to victims of all demographics. Nowadays, cyberbullying affects more than half of young social media users worldwide, suffering from prolonged and or coordinated digital harassment. Also, tools and technologies geared to understand and mitigate it are scarce and mostly ineffective. In this paper, we present a principled and scalable approach to detect bullying and aggressive behavior on Twitter. We propose a robust methodology for extracting text, user, and network-based attributes, studying the properties of bullies and aggressors, and what features distinguish them from regular users. We find that bullies post less, participate in fewer online communities, and are less popular than normal users. Aggressors are relatively popular and tend to include more negativity in their posts. We evaluate our methodology using a corpus of 1.6M tweets posted over 3 months, and show that machine learning classification algorithms can accurately detect users exhibiting bullying and aggressive behavior, with over 90 AUC.", "In this paper, we propose a novel semi-supervised approach for detecting profanity-related offensive content in Twitter. Our approach exploits linguistic regularities in profane language via statistical topic modeling on a huge Twitter corpus, and detects offensive tweets using automatically these generated features. Our approach performs competitively with a variety of machine learning (ML) algorithms. For instance, our approach achieves a true positive rate (TP) of 75.1 over 4029 testing tweets using Logistic Regression, significantly outperforming the popular keyword matching baseline, which has a TP of 69.7 , while keeping the false positive rate (FP) at the same level as the baseline at about 3.77 . Our approach provides an alternative to large scale hand annotation efforts required by fully supervised learning approaches.", "Over the past few years, online bullying and aggression have become increasingly prominent, and manifested in many different forms on social media. However, there is little work analyzing the characteristics of abusive users and what distinguishes them from typical social media users. In this paper, we start addressing this gap by analyzing tweets containing a great amount of abusiveness. We focus on a Twitter dataset revolving around the Gamergate controversy, which led to many incidents of cyberbullying and cyberaggression on various gaming and social media platforms. We study the properties of the users tweeting about Gamergate, the content they post, and the differences in their behavior compared to typical Twitter users. We find that while their tweets are often seemingly about aggressive and hateful subjects, \"Gamergaters\" do not exhibit common expressions of online anger, and in fact primarily differ from typical users in that their tweets are less joyful. They are also more engaged than typical Twitter users, which is an indication as to how and why this controversy is still ongoing. Surprisingly, we find that Gamergaters are less likely to be suspended by Twitter, thus we analyze their properties to identify differences from typical users and what may have led to their suspension. 
We perform an unsupervised machine learning analysis to detect clusters of users who, though currently active, could be considered for suspension since they exhibit similar behaviors with suspended users. Finally, we confirm the usefulness of our analyzed features by emulating the Twitter suspension mechanism with a supervised learning method, achieving very good precision and recall." ] }
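Contribution (iv) above — comparing classifier families such as random forests and neural networks by AUC — corresponds to a routine cross-validation loop. A minimal sketch on synthetic data (not the Twitter features used in the article) could look like this:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

candidates = [("random forest", RandomForestClassifier(random_state=0)),
              ("neural net", MLPClassifier(max_iter=1000, random_state=0))]
for name, clf in candidates:
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean AUC = {auc.mean():.3f}")
```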
1907.08985
2962953210
Real-time Deep Neural Network (DNN) inference with low-latency requirement has become increasingly important for numerous applications in both cloud computing (e.g., Apple's Siri) and edge computing (e.g., Google Waymo's driverless car). FPGA-based DNN accelerators have demonstrated both superior flexibility and performance; in addition, for real-time inference with low batch size, FPGA is expected to achieve further performance improvement. However, the performance gain from the single-FPGA design is obstructed by the limited on-chip resource. In this paper, we employ multiple FPGAs to cooperatively run DNNs with the objective of achieving super-linear speed-up against single-FPGA design. In implementing such systems, we found two barriers that hinder us from achieving the design goal: (1) the lack of a clear partition scheme for each DNN layer to fully exploit parallelism, and (2) the insufficient bandwidth between the off-chip memory and the accelerator due to the growing size of DNNs. To tackle these issues, we propose a general framework, "Super-LIP", which can support different kinds of DNNs. In this paper, we take Convolutional Neural Network (CNN) as a vehicle to illustrate Super-LIP. We first formulate an accurate system-level model to support the exploration of best partition schemes. Then, we develop a novel design methodology to effectively alleviate the heavy loads on memory bandwidth by moving traffic from memory bus to inter-FPGA links. We implement Super-LIP based on ZCU102 FPGA boards. Results demonstrate that Super-LIP with 2 FPGAs can achieve 3.48x speedup, compared to the state-of-the-art single-FPGA design. What is more, as the number of FPGAs scales up, the system latency can be further reduced while maintaining high energy efficiency.
The development of FPGA-based DNN accelerators has evolved in three stages. In the early stage @cite_4 @cite_30 @cite_19 @cite_5 @cite_21 @cite_33 , the whole FPGA is designed as one accelerator, and a controller iteratively moves data from off-chip DRAM to the accelerator for execution. In the second stage, it was observed that a one-size-fits-all accelerator cannot fully utilize the computation resources, because the computation and memory requirements vary across DNN layers. To overcome this shortcoming, multiple accelerators are integrated into one FPGA @cite_24 @cite_25 @cite_12 . However, the limited resources on a single board still restrict the performance gains of DNNs on FPGAs.
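The resource-utilization argument behind the second stage can be made concrete with a back-of-the-envelope calculation: the arithmetic intensity (operations per byte of off-chip traffic) varies by an order of magnitude across layers, so a single fixed accelerator is inevitably mismatched to some of them. The sketch below uses standard VGG-16 layer shapes; the 16-bit datapath is an assumption for illustration.

```python
# Back-of-the-envelope arithmetic intensity (ops per byte of off-chip
# traffic) for a few VGG-16-style conv layers, illustrating why one
# one-size accelerator cannot fit all DNN layers equally well.

BYTES = 2  # assumed 16-bit fixed-point data

def conv_stats(h, w, cin, cout, k=3):
    ops = 2 * k * k * cin * cout * h * w            # multiply + accumulate
    data = BYTES * (k * k * cin * cout              # weights
                    + h * w * cin + h * w * cout)   # input + output maps
    return ops, ops / data

for name, shape in [("conv1_1", (224, 224, 3, 64)),
                    ("conv3_1", (56, 56, 128, 256)),
                    ("conv5_3", (14, 14, 512, 512))]:
    ops, intensity = conv_stats(*shape)
    print(f"{name}: {ops/1e9:.2f} GOPs, {intensity:.0f} ops/byte")
```

Early layers are activation-heavy and bandwidth-bound, while late layers are weight-heavy and compute-bound, which is exactly the mismatch that motivates integrating multiple specialized accelerators on one FPGA.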
{ "cite_N": [ "@cite_30", "@cite_4", "@cite_33", "@cite_21", "@cite_24", "@cite_19", "@cite_5", "@cite_25", "@cite_12" ], "mid": [ "2899915146", "2583383421", "2762910930", "2890068895", "2912949188", "2131595909", "2294282016", "2789246071", "2475840367" ], "abstract": [ "Building a high-performance EPGA accelerator for Deep Neural Networks (DNNs) often requires RTL programming, hardware verification, and precise resource allocation, all of which can be time-consuming and challenging to perform even for seasoned FPGA developers. To bridge the gap between fast DNN construction in software (e.g., Caffe, TensorFlow) and slow hardware implementation, we propose DNNBuilder for building high-performance DNN hardware accelerators on FPGAs automatically. Novel techniques are developed to meet the throughput and latency requirements for both cloud- and edge-devices. A number of novel techniques including high-quality RTL neural network components, a fine-grained layer-based pipeline architecture, and a column-based cache scheme are developed to boost throughput, reduce latency, and save FPGA on-chip memory. To address the limited resource challenge, we design an automatic design space exploration tool to generate optimized parallelism guidelines by considering external memory access bandwidth, data reuse behaviors, FPGA resource availability, and DNN complexity. DNNBuilder is demonstrated on four DNNs (Alexnet, ZF, VGG16, and YOLO) on two FPGAs (XC7Z045 and KU115) corresponding to the edge- and cloud-computing, respectively. The fine-grained layer-based pipeline architecture and the column-based cache scheme contribute to 7.7x and 43x reduction of the latency and BRAM utilization compared to conventional designs. We achieve the best performance (up to 5.15x faster) and efficiency (up to 5.88x more efficient) compared to published FPGA-based classification-oriented DNN accelerators for both edge and cloud computing cases. We reach 4218 GOPS for running object detection DNN which is the highest throughput reported to the best of our knowledge. DNNBuilder can provide millisecond-scale real-time performance for processing HD video input and deliver higher efficiency (up to 4.35x) than the GPU-based solutions.", "Current-generation Deep Neural Networks (DNNs), such as AlexNet and VGG, rely heavily on dense floating-point matrix multiplication (GEMM), which maps well to GPUs (regular parallelism, high TFLOP s). Because of this, GPUs are widely used for accelerating DNNs. Current FPGAs offer superior energy efficiency (Ops Watt), but they do not offer the performance of today's GPUs on DNNs. In this paper, we look at upcoming FPGA technology advances, the rapid pace of innovation in DNN algorithms, and consider whether future high-performance FPGAs will outperform GPUs for next-generation DNNs. The upcoming Intel® 14-nm Stratix? 10 FPGAs will have thousands of hard floating-point units (DSPs) and on-chip RAMs (M20K memory blocks). They will also have high bandwidth memories (HBMs) and improved frequency (HyperFlex? core architecture). This combination of features brings FPGA raw floating point performance within striking distance of GPUs. Meanwhile, DNNs are quickly evolving. For example, recent innovations that exploit sparsity (e.g., pruning) and compact data types (e.g., 1-2 bit) result in major leaps in algorithmic efficiency. 
However, these innovations introduce irregular parallelism on custom data types, which are difficult for GPUs to handle but would be a great fit for FPGA's extreme customizability. This paper evaluates a selection of emerging DNN algorithms on two generations of Intel FPGAs (Arria 10, Stratix 10) against the latest highest performance Titan X Pascal GPU. We created a customizable DNN accelerator template for FPGAs and used it in our evaluations. First, we study various GEMM operations for next-generation DNNs. Our results show that Stratix 10 FPGA is 10%, 50%, and 5.4x better in performance (TOP/sec) than Titan X Pascal GPU on GEMM operations for pruned, Int6, and binarized DNNs, respectively. Then, we present a detailed case study on accelerating Ternary ResNet which relies on sparse GEMM on 2-bit weights (i.e., weights constrained to 0,+1,-1) and full-precision neurons. The Ternary ResNet accuracy is within 1% of the full-precision ResNet which won the 2015 ImageNet competition. On Ternary-ResNet, the Stratix 10 FPGA can deliver 60% better performance over Titan X Pascal GPU, while being 2.3x better in performance/watt. Our results indicate that FPGAs may become the platform of choice for accelerating next-generation DNNs.", "FPGA is a promising candidate for the acceleration of Deep Neural Networks (DNN) with improved latency and energy consumption compared to CPU and GPU-based implementations. DNNs use sequences of layers of regular computation that are well suited for HLS-based design for FPGA. However, optimizing large neural networks under resource constraints is still a key challenge. HLS must manage on-chip computation, buffering resources, and off-chip memory accesses to minimize the total latency. In this paper, we present a design framework for DNNs that uses highly configurable IPs for neural network layers together with a new design space exploration engine for Resource Allocation Management (REALM). We also carry out efficient memory subsystem design and fixed-point weight re-training to further improve our FPGA solution. We demonstrate our design framework on the Long-term Recurrent Convolution Network for video inputs. Our implementation on a Xilinx VC709 board achieves 3.1X speedup compared to an NVIDIA K80 and 4.75X speedup compared to an Intel Xeon with 17.5X lower energy per image.", "FPGA-based CNN accelerators have advantages in flexibility and power efficiency and so are being deployed by a number of cloud computing service providers, including Microsoft, Amazon, Tencent, and Alibaba. Given the increasing complexity of neural networks, however, it is becoming challenging to efficiently map CNNs to multi-FPGA platforms. In this work, we present a scalable framework, FPDeep, which helps engineers map a specific CNN's training logic to a multi-FPGA cluster or cloud and to build RTL implementations for the target network. With FPDeep, multi-FPGA accelerators work in a deeply-pipelined manner using a simple 1-D topology; this enables the accelerators to map directly onto many existing platforms, including Catapult, Catapult2, and almost any tightly-coupled FPGA cluster. FPDeep uses two mechanisms to facilitate high-performance and energy-efficiency. First, FPDeep provides a strategy to balance workload among FPGAs, leading to improved utilization. Second, training of CNNs is executed in a fine-grained inter- and intra-layer pipelined manner, minimizing the time that features need to remain available while waiting for back-propagation.
This reduces the storage demand to where only on-chip memory is required for convolution layers. Experiments show that FPDeep has good scalability to a large number of FPGAs, with the limiting factor being the FPGA-to-FPGA bandwidth. Using six transceivers per FPGA, FPDeep shows linearity up to 60 FPGAs. We evaluate energy efficiency in GOPs/J and find that FPDeep provides up to 3.4 times higher energy efficiency than the Tesla K80 GPU.", "The recently reported successes of convolutional neural networks (CNNs) in many areas has generated wide interest in the development of FPGA-based accelerators. To achieve high performance and energy efficiency, an FPGA-based accelerator must fully utilize the limited computation resources and minimize the data communication and memory access, both of which are impacted and constrained by a variety of design parameters, e.g. the degree and dimension of parallelism, the size of on-chip buffers, the bandwidth of the external memory, and many more. The large design space of the accelerator makes it impractical to search for the optimal design in the implementation phase. To address this problem, a performance model is described to estimate the performance and resource utilization of an FPGA implementation. By this means, the performance bottleneck and design bound can be identified and the optimal design option can be explored early in the design phase. The proposed performance model is validated using a variety of CNN algorithms comparing the results with on-board test results on two different FPGAs.", "In this paper, we introduce a novel FPGA-based methodology for accelerating SQL queries using dynamic partial reconfiguration. Query acceleration is of utmost importance in large database systems to achieve a very high throughput. Although common FPGA-based accelerators are suitable to achieve such a high throughput, their design is hard to extend for new operations. Using partial dynamic reconfiguration, we are able to build more flexible architectures which can be extended to new operations or SQL constructs with a very low area overhead on the FPGA. Furthermore, the reconfiguration of a few FPGA frames can be used to switch very fast from one query to the next. In our approach, an SQL query is transformed into a hardware pipeline consisting of partially reconfigurable modules. The assembly of the (FPGA) data path is done at run-time using a static system providing the stream-based communication interfaces to the partial modules and the database management system. More specifically, each incoming SQL query is analyzed and divided into single operations which are subsequently mapped onto library modules and the composed data path loaded on the FPGA. We show that our approach is able to achieve a substantially higher throughput compared to a software-only solution.", "Convolutional Neural Networks (CNNs) have gained popularity in many computer vision applications such as image classification, face detection, and video analysis, because of their ability to train and classify with high accuracy. Due to multiple convolution and fully-connected layers that are compute- and memory-intensive, it is difficult to perform real-time classification with low power consumption on today's computing systems. FPGAs have been widely explored as hardware accelerators for CNNs because of their reconfigurability and energy efficiency, as well as fast turn-around-time, especially with high-level synthesis methodologies.
Previous FPGA-based CNN accelerators, however, typically implemented generic accelerators agnostic to the CNN configuration, where the reconfigurable capabilities of FPGAs are not fully leveraged to maximize the overall system throughput. In this work, we present a systematic design space exploration methodology to maximize the throughput of an OpenCL-based FPGA accelerator for a given CNN model, considering the FPGA resource constraints such as on-chip memory, registers, computational resources and external memory bandwidth. The proposed methodology is demonstrated by optimizing two representative large-scale CNNs, AlexNet and VGG, on two Altera Stratix-V FPGA platforms, DE5-Net and P395-D8 boards, which have different hardware resources. We achieve a peak performance of 136.5 GOPS for convolution operation, and 117.8 GOPS for the entire VGG network that performs ImageNet classification on P395-D8 board.", "Three-dimensional convolutional neural networks (3D CNNs) are used efficiently in many computer vision applications. Most previous work in this area has concentrated only on designing and optimizing accelerators for 2D CNN, with few attempts made to accelerate 3D CNN on FPGA. We find accelerating 3D CNNs on FPGA to be a challenge due to their high computational complexity and storage demands. More importantly, although the computation patterns of 2D and 3D CNNs are analogous, the conventional approaches adopted for accelerating 2D CNNs may be unfit for 3D CNN acceleration. In this paper, in order to accelerate 2D and 3D CNNs using a uniform framework, we propose a uniform template-based architecture that uses templates based on the Winograd algorithm to ensure fast development of 2D and 3D CNN accelerators. Furthermore, we also develop a uniform analytical model to facilitate efficient design space explorations of 2D and 3D CNN accelerators based on our architecture. Finally, we demonstrate the effectiveness of the template-based architecture by implementing accelerators for real-life 2D and 3D CNNs (VGG16 and C3D) on multiple FPGA platforms. On S2C VUS440, we achieve up to 1.13 TOPS and 1.11 TOPS under low resource utilization for VGG16 and C3D, respectively. End-to-end comparisons with CPU and GPU solutions demonstrate that our implementation of C3D achieves gains of up to 13x and 60x in performance and energy relative to a CPU solution, and a 6.4x energy efficiency gain over a GPU solution.", "Recently, FPGA-based CNN accelerators have demonstrated superior energy efficiency compared to high-performance devices like GPGPUs. However, due to the constrained on-chip resource and many other factors, single-board FPGA designs may have difficulties in achieving optimal energy efficiency. In this paper we present a deeply pipelined multi-FPGA architecture that expands the design space for optimal performance and energy efficiency. A dynamic programming algorithm is proposed to map the CNN computing layers efficiently to different FPGA boards. To demonstrate the potential of the architecture, we built a prototype system with seven FPGA boards connected with high-speed serial links. The experimental results on AlexNet and VGG-16 show that the prototype can achieve up to 21x and 2x energy efficiency compared to optimized multi-core CPU and GPU implementations, respectively." ] }
1907.08985
2962953210
Real-time Deep Neural Network (DNN) inference with low-latency requirement has become increasingly important for numerous applications in both cloud computing (e.g., Apple's Siri) and edge computing (e.g., Google Waymo's driverless car). FPGA-based DNN accelerators have demonstrated both superior flexibility and performance; in addition, for real-time inference with low batch size, FPGA is expected to achieve further performance improvement. However, the performance gain from the single-FPGA design is obstructed by the limited on-chip resource. In this paper, we employ multiple FPGAs to cooperatively run DNNs with the objective of achieving super-linear speed-up against single-FPGA design. In implementing such systems, we found two barriers that hinder us from achieving the design goal: (1) the lack of a clear partition scheme for each DNN layer to fully exploit parallelism, and (2) the insufficient bandwidth between the off-chip memory and the accelerator due to the growing size of DNNs. To tackle these issues, we propose a general framework, "Super-LIP", which can support different kinds of DNNs. In this paper, we take Convolutional Neural Network (CNN) as a vehicle to illustrate Super-LIP. We first formulate an accurate system-level model to support the exploration of best partition schemes. Then, we develop a novel design methodology to effectively alleviate the heavy loads on memory bandwidth by moving traffic from memory bus to inter-FPGA links. We implement Super-LIP based on ZCU102 FPGA boards. Results demonstrate that Super-LIP with 2 FPGAs can achieve 3.48x speedup, compared to the state-of-the-art single-FPGA design. What is more, as the number of FPGAs scales up, the system latency can be further reduced while maintaining high energy efficiency.
Most recently, with the growing demand for time performance, it has become a trend to employ a cluster of FPGAs to execute DNNs @cite_28 @cite_20 @cite_6 @cite_15 @cite_26 @cite_17 @cite_9 @cite_7 . In @cite_28 @cite_15 , the authors organize multiple FPGAs as a pipeline to process a set of input images. In @cite_20 , the authors split the CNN layers to balance pipeline stages for higher throughput and lower cost. The authors in @cite_6 employ multiple FPGAs for the training phase. In @cite_26 @cite_17 , multi-FPGA platforms are utilized to accelerate lung nodule segmentation. All the above works target throughput improvement by using a pipeline of FPGAs, which achieves high throughput but sacrifices latency.
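The trade-off noted above follows from the textbook pipeline model, stated here for clarity; this is generic background rather than a formula taken from the cited works.

```latex
% A textbook m-stage pipeline model (generic background, not a formula from
% the cited works): t_1, ..., t_m are per-stage processing times per image.
\begin{align*}
\text{Throughput} &= \frac{1}{\max_{1 \le i \le m} t_i}, &
\text{Latency} &= \sum_{i=1}^{m} t_i .
\end{align*}
% With perfectly balanced stages (t_i = t), throughput stays 1/t regardless
% of m, while single-image latency is m t: deeper pipelines keep throughput
% high but pay for it in latency.
```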
{ "cite_N": [ "@cite_26", "@cite_7", "@cite_28", "@cite_9", "@cite_6", "@cite_15", "@cite_20", "@cite_17" ], "mid": [ "2899915146", "2944950984", "2916975147", "2520083297", "2890068895", "2294282016", "2466675884", "2583383421" ], "abstract": [ "Building a high-performance EPGA accelerator for Deep Neural Networks (DNNs) often requires RTL programming, hardware verification, and precise resource allocation, all of which can be time-consuming and challenging to perform even for seasoned FPGA developers. To bridge the gap between fast DNN construction in software (e.g., Caffe, TensorFlow) and slow hardware implementation, we propose DNNBuilder for building high-performance DNN hardware accelerators on FPGAs automatically. Novel techniques are developed to meet the throughput and latency requirements for both cloud- and edge-devices. A number of novel techniques including high-quality RTL neural network components, a fine-grained layer-based pipeline architecture, and a column-based cache scheme are developed to boost throughput, reduce latency, and save FPGA on-chip memory. To address the limited resource challenge, we design an automatic design space exploration tool to generate optimized parallelism guidelines by considering external memory access bandwidth, data reuse behaviors, FPGA resource availability, and DNN complexity. DNNBuilder is demonstrated on four DNNs (Alexnet, ZF, VGG16, and YOLO) on two FPGAs (XC7Z045 and KU115) corresponding to the edge- and cloud-computing, respectively. The fine-grained layer-based pipeline architecture and the column-based cache scheme contribute to 7.7x and 43x reduction of the latency and BRAM utilization compared to conventional designs. We achieve the best performance (up to 5.15x faster) and efficiency (up to 5.88x more efficient) compared to published FPGA-based classification-oriented DNN accelerators for both edge and cloud computing cases. We reach 4218 GOPS for running object detection DNN which is the highest throughput reported to the best of our knowledge. DNNBuilder can provide millisecond-scale real-time performance for processing HD video input and deliver higher efficiency (up to 4.35x) than the GPU-based solutions.", "Three-dimensional convolutional neural networks (3D CNNs) have become a promising method in lung nodule segmentation. The high computational complexity and memory requirements of 3D CNNs make it challenging to accelerate 3D CNNs on a single FPGA. In this work, we focus on accelerating the 3D CNN-based lung nodule segmentation on a multi-FPGA platform by proposing an efficient mapping scheme that takes advantage of the massive parallelism provided by the platform, as well as maximizing the computational efficiency of the accelerators. Experimental results show that our system integrating with four Xilinx VCU118 can achieve state-of-the-art performance of 14.5 TOPS, in addition with a 29. 4x performance gain over CPU and 10. 5x more energy efficiency over GPU.CCS Concepts • Computer systems organization @math Special purpose systems.", "Lung nodule segmentation is one of the most significant steps in many Computer Aided Detection (CAD) systems used for lung nodule identification and classification. Three-dimensional convolutional neural networks (3D CNNs) have become a promising method in lung nodule segmentation, as this method can achieve higher detection accuracy than conventional methods. It has been proven that FPGAs can provide the most energy-efficient solution for CNN acceleration. 
However, the high computational complexity and memory requirements of 3D CNNs make it challenging to accelerate 3D CNNs on a single FPGA, as this will further bottleneck the performance of a 3D CNN-based CAD system. Accordingly, in this work, we focus on accelerating the 3D CNN-based lung nodule segmentation on a multi-FPGA platform by proposing an efficient mapping scheme that takes advantage of the massive parallelism provided by the platform, as well as maximizing the computational efficiency of the accelerators. Experimental results show that our system is able to achieve high computational efficiency and thereby a state-of-the-art performance of 14.5 TOPS at 200 MHz. Comparisons with CPU and GPU solutions demonstrate that our system achieves a 29.4x performance gain over CPU and a 10.5x energy efficiency improvement over GPU.", "With the recent advancement of multilayer convolutional neural networks (CNN), deep learning has achieved amazing success in many areas, especially in visual content understanding and classification. To improve the performance and energy-efficiency of the computation-demanding CNN, the FPGA-based acceleration emerges as one of the most attractive alternatives. In this paper we design and implement Caffeine, a hardware/software co-designed library to efficiently accelerate the entire CNN on FPGAs. First, we propose a uniformed convolutional matrix-multiplication representation for both computation-intensive convolutional layers and communication-intensive fully connected (FCN) layers. Second, we design Caffeine with the goal to maximize the underlying FPGA computing and bandwidth resource utilization, with a key focus on the bandwidth optimization by the memory access reorganization not studied in prior work. Moreover, we implement Caffeine in the portable high-level synthesis and provide various hardware/software definable parameters for user configurations. Finally, we also integrate Caffeine into the industry-standard software deep learning framework Caffe. We evaluate Caffeine and its integration with Caffe by implementing VGG16 and AlexNet network on multiple FPGA platforms. Caffeine achieves a peak performance of 365 GOPS on Xilinx KU060 FPGA and 636 GOPS on Virtex7 690t FPGA. This is the best published result to our best knowledge. We achieve more than 100x speedup on FCN layers over previous FPGA accelerators. An end-to-end evaluation with Caffe integration shows up to 7.3x and 43.5x performance and energy gains over Caffe on a 12-core Xeon server, and 1.5x better energy-efficiency over the GPU implementation on a medium-sized FPGA (KU060). Performance projections to a system with a high-end FPGA (Virtex7 690t) shows even higher gains.", "FPGA-based CNN accelerators have advantages in flexibility and power efficiency and so are being deployed by a number of cloud computing service providers, including Microsoft, Amazon, Tencent, and Alibaba. Given the increasing complexity of neural networks, however, it is becoming challenging to efficiently map CNNs to multi-FPGA platforms. In this work, we present a scalable framework, FPDeep, which helps engineers map a specific CNN's training logic to a multi-FPGA cluster or cloud and to build RTL implementations for the target network. With FPDeep, multi-FPGA accelerators work in a deeply-pipelined manner using a simple 1-D topology; this enables the accelerators to map directly onto many existing platforms, including Catapult, Catapult2, and almost any tightly-coupled FPGA cluster.
FPDeep uses two mechanisms to facilitate high-performance and energy-efficiency. First, FPDeep provides a strategy to balance workload among FPGAs, leading to improved utilization. Second, training of CNNs is executed in a fine-grained inter- and intra-layer pipelined manner, minimizing the time that features need to remain available while waiting for back-propagation. This reduces the storage demand to where only on-chip memory is required for convolution layers. Experiments show that FPDeep has good scalability to a large number of FPGAs, with the limiting factor being the FPGA-to-FPGA bandwidth. Using six transceivers per FPGA, FPDeep shows linearity up to 60 FPGAs. We evaluate energy efficiency in GOPs/J and find that FPDeep provides up to 3.4 times higher energy efficiency than the Tesla K80 GPU.", "Convolutional Neural Networks (CNNs) have gained popularity in many computer vision applications such as image classification, face detection, and video analysis, because of their ability to train and classify with high accuracy. Due to multiple convolution and fully-connected layers that are compute- and memory-intensive, it is difficult to perform real-time classification with low power consumption on today's computing systems. FPGAs have been widely explored as hardware accelerators for CNNs because of their reconfigurability and energy efficiency, as well as fast turn-around-time, especially with high-level synthesis methodologies. Previous FPGA-based CNN accelerators, however, typically implemented generic accelerators agnostic to the CNN configuration, where the reconfigurable capabilities of FPGAs are not fully leveraged to maximize the overall system throughput. In this work, we present a systematic design space exploration methodology to maximize the throughput of an OpenCL-based FPGA accelerator for a given CNN model, considering the FPGA resource constraints such as on-chip memory, registers, computational resources and external memory bandwidth. The proposed methodology is demonstrated by optimizing two representative large-scale CNNs, AlexNet and VGG, on two Altera Stratix-V FPGA platforms, DE5-Net and P395-D8 boards, which have different hardware resources. We achieve a peak performance of 136.5 GOPS for convolution operation, and 117.8 GOPS for the entire VGG network that performs ImageNet classification on P395-D8 board.", "Convolutional neural networks (CNNs) are revolutionizing machine learning, but they present significant computational challenges. Recently, many FPGA-based accelerators have been proposed to improve the performance and efficiency of CNNs. Current approaches construct a single processor that computes the CNN layers one at a time; the processor is optimized to maximize the throughput at which the collection of layers is computed. However, this approach leads to inefficient designs because the same processor structure is used to compute CNN layers of radically varying dimensions. We present a new CNN accelerator paradigm and an accompanying automated design methodology that partitions the available FPGA resources into multiple processors, each of which is tailored for a different subset of the CNN convolutional layers. Using the same FPGA resources as a single large processor, multiple smaller specialized processors increase computational efficiency and lead to a higher overall throughput. Our design methodology achieves 3.8x higher throughput than the state-of-the-art approach on evaluating the popular AlexNet CNN on a Xilinx Virtex-7 FPGA.
For the more recent SqueezeNet and GoogLeNet, the speedups are 2.2x and 2.0x.", "Current-generation Deep Neural Networks (DNNs), such as AlexNet and VGG, rely heavily on dense floating-point matrix multiplication (GEMM), which maps well to GPUs (regular parallelism, high TFLOP/s). Because of this, GPUs are widely used for accelerating DNNs. Current FPGAs offer superior energy efficiency (Ops/Watt), but they do not offer the performance of today's GPUs on DNNs. In this paper, we look at upcoming FPGA technology advances, the rapid pace of innovation in DNN algorithms, and consider whether future high-performance FPGAs will outperform GPUs for next-generation DNNs. The upcoming Intel® 14-nm Stratix™ 10 FPGAs will have thousands of hard floating-point units (DSPs) and on-chip RAMs (M20K memory blocks). They will also have high bandwidth memories (HBMs) and improved frequency (HyperFlex™ core architecture). This combination of features brings FPGA raw floating point performance within striking distance of GPUs. Meanwhile, DNNs are quickly evolving. For example, recent innovations that exploit sparsity (e.g., pruning) and compact data types (e.g., 1-2 bit) result in major leaps in algorithmic efficiency. However, these innovations introduce irregular parallelism on custom data types, which are difficult for GPUs to handle but would be a great fit for FPGA's extreme customizability. This paper evaluates a selection of emerging DNN algorithms on two generations of Intel FPGAs (Arria 10, Stratix 10) against the latest highest performance Titan X Pascal GPU. We created a customizable DNN accelerator template for FPGAs and used it in our evaluations. First, we study various GEMM operations for next-generation DNNs. Our results show that Stratix 10 FPGA is 10%, 50%, and 5.4x better in performance (TOP/sec) than Titan X Pascal GPU on GEMM operations for pruned, Int6, and binarized DNNs, respectively. Then, we present a detailed case study on accelerating Ternary ResNet which relies on sparse GEMM on 2-bit weights (i.e., weights constrained to 0,+1,-1) and full-precision neurons. The Ternary ResNet accuracy is within 1% of the full-precision ResNet which won the 2015 ImageNet competition. On Ternary-ResNet, the Stratix 10 FPGA can deliver 60% better performance over Titan X Pascal GPU, while being 2.3x better in performance/watt. Our results indicate that FPGAs may become the platform of choice for accelerating next-generation DNNs." ] }
1907.08985
2962953210
Real-time Deep Neural Network (DNN) inference with low-latency requirement has become increasingly important for numerous applications in both cloud computing (e.g., Apple's Siri) and edge computing (e.g., Google Waymo's driverless car). FPGA-based DNN accelerators have demonstrated both superior flexibility and performance; in addition, for real-time inference with low batch size, FPGA is expected to achieve further performance improvement. However, the performance gain from the single-FPGA design is obstructed by the limited on-chip resource. In this paper, we employ multiple FPGAs to cooperatively run DNNs with the objective of achieving super-linear speed-up against single-FPGA design. In implementing such systems, we found two barriers that hinder us from achieving the design goal: (1) the lack of a clear partition scheme for each DNN layer to fully exploit parallelism, and (2) the insufficient bandwidth between the off-chip memory and the accelerator due to the growing size of DNNs. To tackle these issues, we propose a general framework, "Super-LIP", which can support different kinds of DNNs. In this paper, we take Convolutional Neural Network (CNN) as a vehicle to illustrate Super-LIP. We first formulate an accurate system-level model to support the exploration of best partition schemes. Then, we develop a novel design methodology to effectively alleviate the heavy loads on memory bandwidth by moving traffic from memory bus to inter-FPGA links. We implement Super-LIP based on ZCU102 FPGA boards. Results demonstrate that Super-LIP with 2 FPGAs can achieve 3.48x speedup, compared to the state-of-the-art single-FPGA design. What is more, as the number of FPGAs scales up, the system latency can be further reduced while maintaining high energy efficiency.
To satisfy the low-latency requirement of real-time DNN inference, Microsoft's Brainwave @cite_10 @cite_34 devises techniques to pin weights on different FPGAs. Such an approach works well for RNNs with small intermediate data, but it is awkward for CNN implementations due to their large intermediate data and complicated data reuse patterns. Note that in @cite_10 @cite_34 , the authors use only one FPGA for CNNs, whose input images have low resolution, which hides the bandwidth bottleneck issue. However, for more realistic CNN applications with high-resolution inputs, such as medical images, it remains unknown how to achieve real-time inference with ultra-low latency using multiple FPGAs. Super-LIP is proposed to fill this gap.
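The contrast between RNNs and CNNs here is easy to quantify. The sketch below (with illustrative layer shapes of our own choosing, not taken from Brainwave or Super-LIP) compares the weight volume that pinning keeps on-chip against the intermediate activation volume that still has to move.

```python
# Why weight pinning favors RNNs over CNNs: compare per-layer weight volume
# against intermediate (activation) volume. All shapes below are assumed
# for illustration.

BYTES = 2  # assumed 16-bit datapath

# LSTM layer, hidden size 1024: weights dominate, activations are tiny.
h = 1024
lstm_weights = 4 * (h * h + h * h) * BYTES       # 4 gates x (input + recurrent)
lstm_activations = 2 * h * BYTES                 # hidden + cell state

# 3x3 conv layer, 256 -> 256 channels, on a 1024x1024 medical-style image:
# activations dwarf the weights, so pinning weights on-chip helps little.
H = W = 1024
cin = cout = 256
conv_weights = 3 * 3 * cin * cout * BYTES
conv_activations = H * W * (cin + cout) * BYTES  # input + output feature maps

print(f"LSTM: weights {lstm_weights/2**20:7.1f} MiB, "
      f"activations {lstm_activations/2**10:7.1f} KiB")
print(f"Conv: weights {conv_weights/2**20:7.1f} MiB, "
      f"activations {conv_activations/2**20:7.1f} MiB")
```

For the LSTM, pinning roughly 16 MiB of weights removes essentially all per-step off-chip traffic; for the high-resolution conv layer, about 1 GiB of feature maps dwarfs the roughly 1 MiB of weights, which is exactly the bandwidth bottleneck described above.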
{ "cite_N": [ "@cite_34", "@cite_10" ], "mid": [ "2294282016", "2883929540" ], "abstract": [ "Convolutional Neural Networks (CNNs) have gained popularity in many computer vision applications such as image classification, face detection, and video analysis, because of their ability to train and classify with high accuracy. Due to multiple convolution and fully-connected layers that are compute- memory-intensive, it is difficult to perform real-time classification with low power consumption on today?s computing systems. FPGAs have been widely explored as hardware accelerators for CNNs because of their reconfigurability and energy efficiency, as well as fast turn-around-time, especially with high-level synthesis methodologies. Previous FPGA-based CNN accelerators, however, typically implemented generic accelerators agnostic to the CNN configuration, where the reconfigurable capabilities of FPGAs are not fully leveraged to maximize the overall system throughput. In this work, we present a systematic design space exploration methodology to maximize the throughput of an OpenCL-based FPGA accelerator for a given CNN model, considering the FPGA resource constraints such as on-chip memory, registers, computational resources and external memory bandwidth. The proposed methodology is demonstrated by optimizing two representative large-scale CNNs, AlexNet and VGG, on two Altera Stratix-V FPGA platforms, DE5-Net and P395-D8 boards, which have different hardware resources. We achieve a peak performance of 136.5 GOPS for convolution operation, and 117.8 GOPS for the entire VGG network that performs ImageNet classification on P395-D8 board.", "Interactive AI-powered services require low-latency evaluation of deep neural network (DNN) models—aka \"\"real-time AI\"\". The growing demand for computationally expensive, state-of-the-art DNNs, coupled with diminishing performance gains of general-purpose architectures, has fueled an explosion of specialized Neural Processing Units (NPUs). NPUs for interactive services should satisfy two requirements: (1) execution of DNN models with low latency, high throughput, and high efficiency, and (2) flexibility to accommodate evolving state-of-the-art models (e.g., RNNs, CNNs, MLPs) without costly silicon updates. This paper describes the NPU architecture for Project Brainwave, a production-scale system for real-time AI. The Brainwave NPU achieves more than an order of magnitude improvement in latency and throughput over state-of-the-art GPUs on large RNNs at a batch size of 1. The NPU attains this performance using a single-threaded SIMD ISA paired with a distributed microarchitecture capable of dispatching over 7M operations from a single instruction. The spatially distributed microarchitecture, scaled up to 96,000 multiply-accumulate units, is supported by hierarchical instruction decoders and schedulers coupled with thousands of independently addressable high-bandwidth on-chip memories, and can transparently exploit many levels of fine-grain SIMD parallelism. When targeting an FPGA, microarchitectural parameters such as native datapaths and numerical precision can be \"synthesis specialized\" to models at compile time, enabling atypically high FPGA performance competitive with hardened NPUs. When running on an Intel Stratix 10 280 FPGA, the Brainwave NPU achieves performance ranging from ten to over thirty-five teraflops, with no batching, on large, memory-intensive RNNs." ] }
1907.08985
2962953210
Real-time Deep Neural Network (DNN) inference with low-latency requirement has become increasingly important for numerous applications in both cloud computing (e.g., Apple's Siri) and edge computing (e.g., Google Waymo's driverless car). FPGA-based DNN accelerators have demonstrated both superior flexibility and performance; in addition, for real-time inference with low batch size, FPGA is expected to achieve further performance improvement. However, the performance gain from the single-FPGA design is obstructed by the limited on-chip resource. In this paper, we employ multiple FPGAs to cooperatively run DNNs with the objective of achieving super-linear speed-up against single-FPGA design. In implementing such systems, we found two barriers that hinder us from achieving the design goal: (1) the lack of a clear partition scheme for each DNN layer to fully exploit parallelism, and (2) the insufficient bandwidth between the off-chip memory and the accelerator due to the growing size of DNNs. To tackle these issues, we propose a general framework, "Super-LIP", which can support different kinds of DNNs. In this paper, we take Convolutional Neural Network (CNN) as a vehicle to illustrate Super-LIP. We first formulate an accurate system-level model to support the exploration of best partition schemes. Then, we develop a novel design methodology to effectively alleviate the heavy loads on memory bandwidth by moving traffic from memory bus to inter-FPGA links. We implement Super-LIP based on ZCU102 FPGA boards. Results demonstrate that Super-LIP with 2 FPGAs can achieve 3.48x speedup, compared to the state-of-the-art single-FPGA design. What is more, as the number of FPGAs scales up, the system latency can be further reduced while maintaining high energy efficiency.
Another branch of related work deploys CNNs on multi-core mobile devices or multi-processor systems-on-chip (MPSoCs) @cite_31 @cite_27 @cite_13 @cite_29 @cite_3 @cite_14 . Unlike FPGA-based implementations, which require designers to determine the designs of both the communication and computation sub-systems, the processing elements in these systems have fixed designs (e.g., CPUs, GPUs). Consequently, the optimization problem on such systems is how to schedule tasks onto the computation components in parallel, without considering how to tailor the hardware design.
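As a minimal illustration of this software-only mapping problem (our own sketch, treating per-layer workloads as independent tasks for simplicity, which is not how every cited work formulates it), a greedy list scheduler assigns layer tasks to fixed processing elements so as to balance load.

```python
# A minimal greedy list-scheduling sketch (our illustration, not an
# algorithm from the cited works): map DNN layer tasks onto fixed
# processing elements (e.g., big/little CPU cores plus a GPU) to balance
# load. The hardware is a given; only the task-to-PE mapping is tuned.
import heapq

def greedy_map(task_times, num_pes):
    # Min-heap of (accumulated load, pe_id); longest-task-first assignment.
    pes = [(0.0, i) for i in range(num_pes)]
    heapq.heapify(pes)
    assignment = []
    for t in sorted(task_times, reverse=True):
        load, i = heapq.heappop(pes)         # least-loaded PE so far
        assignment.append((t, i))
        heapq.heappush(pes, (load + t, i))
    return assignment, max(load for load, _ in pes)  # makespan

tasks = [8.0, 5.0, 4.0, 3.0, 2.0, 2.0]  # per-layer times (ms), assumed
mapping, makespan = greedy_map(tasks, num_pes=3)
print(mapping, makespan)
```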
{ "cite_N": [ "@cite_14", "@cite_29", "@cite_3", "@cite_27", "@cite_31", "@cite_13" ], "mid": [ "2912949188", "2294282016", "2613168994", "2584311934", "2764337919", "2475840367" ], "abstract": [ "The recently reported successes of convolutional neural networks (CNNs) in many areas has generated wide interest in the development of FPGA-based accelerators. To achieve high performance and energy efficiency, an FPGA-based accelerator must fully utilize the limited computation resources and minimize the data communication and memory access, both of which are impacted and constrained by a variety of design parameters, e.g. the degree and dimension of parallelism, the size of on-chip buffers, the bandwidth of the external memory, and many more. The large design space of the accelerator makes it impractical to search for the optimal design in the implementation phase. To address this problem, a performance model is described to estimate the performance and resource utilization of an FPGA implementation. By this means, the performance bottleneck and design bound can be identified and the optimal design option can be explored early in the design phase. The proposed performance model is validated using a variety of CNN algorithms comparing the results with on-board test results on two different FPGAs.", "Convolutional Neural Networks (CNNs) have gained popularity in many computer vision applications such as image classification, face detection, and video analysis, because of their ability to train and classify with high accuracy. Due to multiple convolution and fully-connected layers that are compute- memory-intensive, it is difficult to perform real-time classification with low power consumption on today?s computing systems. FPGAs have been widely explored as hardware accelerators for CNNs because of their reconfigurability and energy efficiency, as well as fast turn-around-time, especially with high-level synthesis methodologies. Previous FPGA-based CNN accelerators, however, typically implemented generic accelerators agnostic to the CNN configuration, where the reconfigurable capabilities of FPGAs are not fully leveraged to maximize the overall system throughput. In this work, we present a systematic design space exploration methodology to maximize the throughput of an OpenCL-based FPGA accelerator for a given CNN model, considering the FPGA resource constraints such as on-chip memory, registers, computational resources and external memory bandwidth. The proposed methodology is demonstrated by optimizing two representative large-scale CNNs, AlexNet and VGG, on two Altera Stratix-V FPGA platforms, DE5-Net and P395-D8 boards, which have different hardware resources. We achieve a peak performance of 136.5 GOPS for convolution operation, and 117.8 GOPS for the entire VGG network that performs ImageNet classification on P395-D8 board.", "Accelerating Convolutional Neural Networks (CNNs) on GPUs usually involves two stages: training and inference. Traditionally, this two-stage process is deployed on high-end GPU-equipped servers. Driven by the increase in compute power of desktop and mobile GPUs, there is growing interest in performing inference on various kinds of platforms. In contrast to the requirements of high throughput and accuracy during the training stage, end-users will face diverse requirements related to inference tasks. To address this emerging trend and new requirements, we propose Pervasive CNN (P-CNN), a user satisfaction-aware CNN inference framework. 
P-CNN is composed of two phases: cross-platform offline compilation and run-time management. Based on users' requirements, offline compilation generates the optimal kernel using architecture-independent techniques, such as adaptive batch size selection and coordinated fine-tuning. The runtime management phase consists of accuracy tuning, execution, and calibration. First, accuracy tuning dynamically identifies the fastest kernels with acceptable accuracy. Next, the run-time kernel scheduler partitions the optimal computing resource for each layer and schedules the GPU thread blocks. If its accuracy is not acceptable to the end-user, the calibration stage selects a slower but more precise kernel to improve the accuracy. Finally, we design a user satisfaction metric for CNNs to evaluate our Pervasive design. Our evaluation results show P-CNN can provide the best user satisfaction for different inference tasks.", "As convolution layers contribute most operations in convolutional neural network (CNN) algorithms, an effective convolution acceleration scheme significantly affects the efficiency and performance of a hardware CNN accelerator. Convolution in CNNs involves three-dimensional multiply and accumulate (MAC) operations with four levels of loops, which results in a large design space. Prior works either employ limited loop optimization techniques, e.g. loop unrolling, tiling and interchange, or only tune some of the design variables after the accelerator architecture and dataflow are already fixed. Without fully studying the convolution loop optimization before the hardware design phase, the resulting accelerator can hardly exploit the data reuse and manage data movement efficiently. This work overcomes these barriers by quantitatively analyzing and optimizing the design objectives (e.g. required memory access) of the CNN accelerator based on multiple design variables. We systematically explore the trade-offs of hardware cost by searching the design variable configurations, and propose a specific dataflow of hardware CNN acceleration to minimize the memory access and data movement while maximizing the resource utilization to achieve high performance. The proposed CNN acceleration scheme and architecture are demonstrated on a standalone Altera Arria 10 GX 1150 FPGA by implementing end-to-end VGG-16 CNN model and achieved 645.25 GOPS of throughput and 47.97 ms of latency, which is a >3.2× enhancement compared to state-of-the-art FPGA implementations of VGG model.", "Convolutional neural networks (CNNs) are deployed in a wide range of image recognition, scene segmentation and object detection applications. Achieving state of the art accuracy in CNNs often results in large models and complex topologies that require significant compute resources to complete in a timely manner. Binarised neural networks (BNNs) have been proposed as an optimised variant of CNNs, which constrain the weights and activations to +1 or -1 and thus offer compact models and lower computational complexity per operation. This paper presents a high performance BNN accelerator on the Intel® Xeon+FPGA™ platform. The proposed accelerator is designed to take advantage of the Xeon+FPGA system in a way that a specialised FPGA architecture can be targeted for the most compute-intensive parts of the BNN whilst other parts of the topology can be handled by the Xeon™ CPU.
The implementation is evaluated by comparing the raw compute performance and energy efficiency for key layers in standard CNN topologies against an Nvidia Titan X Pascal GPU and other published FPGA BNN accelerators. The results show that our single-package integrated Arria™ 10 FPGA accelerator coupled with a high-end Xeon CPU can offer comparable performance and better energy efficiency than a high-end discrete Titan X GPU card. In addition, our solution delivers the best performance compared to previous BNN FPGA implementations.", "Recently, FPGA-based CNN accelerators have demonstrated superior energy efficiency compared to high-performance devices like GPGPUs. However, due to the constrained on-chip resource and many other factors, single-board FPGA designs may have difficulties in achieving optimal energy efficiency. In this paper we present a deeply pipelined multi-FPGA architecture that expands the design space for optimal performance and energy efficiency. A dynamic programming algorithm is proposed to map the CNN computing layers efficiently to different FPGA boards. To demonstrate the potential of the architecture, we built a prototype system with seven FPGA boards connected with high-speed serial links. The experimental results on AlexNet and VGG-16 show that the prototype can achieve up to 21x and 2x energy efficiency compared to optimized multi-core CPU and GPU implementations, respectively." ] }
1907.08938
2964314696
An @math maximum distance separable (MDS) code has optimal repair access if the minimum number of symbols accessed from @math surviving nodes is achieved, where @math . Existing results show that the sub-packetization @math of an @math high code rate (i.e., @math ) MDS code with optimal repair access is at least @math . In this paper, we propose a class of multi-layer transformed MDS codes such that the sub-packetization is @math , where @math , and the repair access is optimal for any single node. We show that the sub-packetization of the proposed multi-layer transformed MDS codes is strictly less than the existing known lower bound when @math , which is achieved by restricting the choice of @math specific helper nodes in repairing a failed node. We further propose multi-layer transformed EVENODD codes that have optimal repair access for any single node and lower sub-packetization than the existing binary MDS array codes with optimal repair access for any single node. With our multi-layer transformation, we can design new MDS codes that have the properties of low computational complexity, optimal repair access for any single node, and relatively small sub-packetization, all of which are critical for maintaining the reliability of distributed storage systems.
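For readers outside this subfield, the following is standard regenerating-code background (textbook notation supplied by us for context; the paper's own parameters are hidden behind the @math tokens above).

```latex
% Standard background: a file of M symbols is stored across n nodes, each
% holding alpha symbols (the sub-packetization), and a failed node is
% repaired by downloading beta symbols from each of d helper nodes. At the
% minimum-storage (MSR) point of the cut-set bound,
\[
  \alpha = \frac{M}{k}, \qquad
  \beta = \frac{\alpha}{d - k + 1},
\]
% so the total repair bandwidth d\beta = d\alpha/(d-k+1) is far below the
% k\alpha needed to decode the whole file, while the code remains MDS
% (any k nodes suffice to recover the data).
```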
Many constructions of MSR codes @cite_11 @cite_18 @cite_5 @cite_25 @cite_32 @cite_15 @cite_28 @cite_3 have been proposed in the literature. For example, product-matrix MSR codes @cite_11 support parameters that satisfy @math , and were subsequently extended to achieve lower computational complexity @cite_18 . Another construction of MSR codes is based on interference alignment @cite_5 . However, the above two constructions of MSR codes are only suitable for low code rates (i.e., @math ).
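To make the low-rate restriction concrete, the following is standard background from the regenerating-codes literature (our addition; the exact conditions elided by the @math tokens above may differ).

```latex
% Standard background (our addition): the product-matrix MSR framework of
% Rashmi, Shah, and Kumar requires d >= 2k - 2. Since repair contacts d of
% the n - 1 surviving nodes, n >= d + 1, hence
\[
  n \ge d + 1 \ge 2k - 1
  \quad\Longrightarrow\quad
  \frac{k}{n} \le \frac{k}{2k - 1} \approx \frac{1}{2},
\]
% i.e., the code rate cannot appreciably exceed one half, which is the
% "low code rate" regime referred to in the text.
```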
{ "cite_N": [ "@cite_18", "@cite_28", "@cite_32", "@cite_3", "@cite_5", "@cite_15", "@cite_25", "@cite_11" ], "mid": [ "2953340712", "2492240243", "1669634226", "1647846156", "2962865070", "2784871282", "2166011389", "2056826630" ], "abstract": [ "We present a high-rate @math -MSR code with a sub-packetization level that is polynomial in the dimension @math of the code. While polynomial sub-packetization level was achieved earlier for vector MDS codes that repair systematic nodes optimally, no such MSR code construction is known. In the low-rate regime (i. e., rates less than one-half), MSR code constructions with a linear sub-packetization level are available. But in the high-rate regime (i. e., rates greater than one-half), the known MSR code constructions required a sub-packetization level that is exponential in @math . In the present paper, we construct an MSR code for @math with a fixed rate @math achieveing a sub-packetization level @math . The code allows help-by-transfer repair, i. e., no computations are needed at the helper nodes during repair of a failed node.", "This paper presents an explicit construction for an @math regenerating code over a field @math operating at the Minimum Storage Regeneration (MSR) point. The MSR code can be constructed to have rate @math as close to @math as desired, sub-packetization given by @math , for @math , field size no larger than @math and where all code symbols can be repaired with the same minimum data download. The construction modifies a prior construction by Sasidharan et. al. which required far larger field-size. A building block appearing in the construction is a scalar MDS code of block length @math . The code has a simple layered structure with coupling across layers, that allows both node repair and data recovery to be carried out by making multiple calls to a decoder for the scalar MDS code. While this work was carried out independently, there is considerable overlap with a prior construction by Ye and Barg. It is shown here that essentially the same architecture can be employed to construct MSR codes using vector binary MDS codes as building blocks in place of scalar MDS codes. The advantage here is that computations can now be carried out over a field of smaller size potentially even over the binary field as we demonstrate in an example. Further, we show how the construction can be extended to handle the case of @math under a mild restriction on the choice of helper nodes.", "An [n, k] linear code C that is subject to locality constraints imposed by a parity check matrix H 0 is said to be a maximally recoverable (MR) code if it can recover from any erasure pattern that some k-dimensional subcode of the null space of H 0 can recover from. The focus in this paper is on MR codes constrained to have all-symbol locality r. Given that it is challenging to construct MR codes having small field size, we present results in two directions. In the first, we relax the MR constraint and require only that apart from the requirement of being an optimum all-symbol locality code, the code must yield an MDS code when punctured in a single, specific pattern which ensures that each local code is punctured in precisely one coordinate and that no two local codes share the same punctured coordinate. We term these codes as partially maximally recoverable (PMR) codes. We provide a simple construction for high-rate PMR codes and then provide a general, promising approach that needs further investigation. 
In the second direction, we present three constructions of MR codes with improved parameters, primarily the size of the finite field employed in the construction.", "We present a high-rate (n, k, d = n − 1)-MSR code with a sub-packetization level that is polynomial in the dimension k of the code. While polynomial sub-packetization level was achieved earlier for vector MDS codes that repair systematic nodes optimally, no such MSR code construction is known. In the low-rate regime (i.e., rates less than one-half), MSR code constructions with a linear sub-packetization level are available. But in the high-rate regime (i.e., rates greater than one-half), the known MSR code constructions required a sub-packetization level that is exponential in k. In the present paper, we construct an MSR code for d = n − 1 with a fixed rate @math , achieving a sub-packetization level α = O(kt). The code allows help-by-transfer repair, i.e., no computations are needed at the helper nodes during repair of a failed node.", "In this paper, we revisit the problem of characterizing the secrecy capacity of minimum storage regenerating (MSR) codes under the passive @math -eavesdropper model, where the eavesdropper has access to data stored on @math nodes and the repair data for an additional @math nodes. We study it from the information-theoretic perspective. First, some general properties of MSR codes as well as a simple and generally applicable upper bound on secrecy capacity are given. Second, a new concept of stable MSR codes is introduced, where the stable property is shown to be closely linked with secrecy capacity. Finally, a comprehensive and explicit result on secrecy capacity in the linear MSR scenario is presented, which generalizes all related works in the literature and also predicts certain results for some unexplored linear MSR codes.", "Designing good error correcting codes whose generator matrix has a support constraint, i.e., one for which only certain entries of the generator matrix are allowed to be non-zero, has found many recent applications, including in distributed coding and storage, multiple access networks, and weakly secure data exchange. The dual problem, where the parity check matrix has a support constraint, comes up in the design of locally repairable codes. The central problem here is to design codes with the largest possible minimum distance, subject to the given support constraint on the generator matrix. An upper bound on the minimum distance can be obtained through a set of singleton bounds, which can be alternatively thought of as a cut-set bound. Furthermore, it is well known that, if the field size is large enough, any random generator matrix obeying the support constraint will achieve the maximum minimum distance with high probability. Since random codes are not easy to decode, structured codes with efficient decoders, e.g., Reed-Solomon codes, are much more desirable. The GM-MDS conjecture states that the maximum minimum distance over all codes satisfying the generator matrix support constraint can be obtained by a Reed-Solomon code. If true, this would have significant consequences. The conjecture has been proven for several special cases: when the dimension of the code k is less than or equal to five, when the number of distinct support sets on the rows of the generator matrix m, say, is less than or equal to three, or when the generator matrix is sparsest and balanced. In this paper, we report on further progress on the GM-MDS conjecture.
In particular, we show that the conjecture is true for all m less than or equal to six. This generalizes all previously known results (except for the sparsest and balanced case, which is a very special support constraint).", "A new family of maximum distance separable (MDS) array codes is presented. The code arrays contain p information columns and r independent parity columns, each column consisting of p-1 bits, where p is a prime. We extend a previously known construction for the case r=2 to three and more parity columns. It is shown that when r=3 such extension is possible for any prime p. For larger values of r, we give necessary and sufficient conditions for our codes to be MDS, and then prove that if p belongs to a certain class of primes these conditions are satisfied up to r ≤ 8. One of the advantages of the new codes is that encoding and decoding may be accomplished using simple cyclic shifts and XOR operations on the columns of the code array. We develop efficient decoding procedures for the case of two- and three-column errors. This again extends the previously known results for the case of a single-column error. Another primary advantage of our codes is related to the problem of efficient information updates. We present upper and lower bounds on the average number of parity bits which have to be updated in an MDS code over GF(2^m), following an update in a single information bit. This average number is of importance in many storage applications which require frequent updates of information. We show that the upper bound obtained from our codes is close to the lower bound and, most importantly, does not depend on the size of the code symbols.", "Regenerating codes are a class of recently developed codes for distributed storage that, like Reed-Solomon codes, permit data recovery from any arbitrary k of n nodes. However regenerating codes possess in addition, the ability to repair a failed node by connecting to any arbitrary d nodes and downloading an amount of data that is typically far less than the size of the data file. This amount of download is termed the repair bandwidth. Minimum storage regenerating (MSR) codes are a subclass of regenerating codes that require the least amount of network storage; every such code is a maximum distance separable (MDS) code. Further, when a replacement node stores data identical to that in the failed node, the repair is termed as exact. The four principal results of the paper are (a) the explicit construction of a class of MDS codes for d = n - 1 ≥ 2k - 1 termed the MISER code, that achieves the cut-set bound on the repair bandwidth for the exact repair of systematic nodes, (b) proof of the necessity of interference alignment in exact-repair MSR codes, (c) a proof showing the impossibility of constructing linear, exact-repair MSR codes for d < 2k - 3 in the absence of symbol extension, and (d) the construction, also explicit, of high-rate MSR codes for d = k + 1. Interference alignment (IA) is a theme that runs throughout the paper: the MISER code is built on the principles of IA and IA is also a crucial component to the nonexistence proof for d < 2k - 3. To the best of our knowledge, the constructions presented in this paper are the first explicit constructions of regenerating codes that achieve the cut-set bound." ] }
1907.08938
2964314696
An @math maximum distance separable (MDS) code has optimal repair access if the minimum number of symbols accessed from @math surviving nodes is achieved, where @math . Existing results show that the sub-packetization @math of an @math high code rate (i.e., @math ) MDS code with optimal repair access is at least @math . In this paper, we propose a class of multi-layer transformed MDS codes such that the sub-packetization is @math , where @math , and the repair access is optimal for any single node. We show that the sub-packetization of the proposed multi-layer transformed MDS codes is strictly less than the existing known lower bound when @math , achieved by restricting the choice of @math specific helper nodes in repairing a failed node. We further propose multi-layer transformed EVENODD codes that have optimal repair access for any single node and lower sub-packetization than the existing binary MDS array codes with optimal repair access for any single node. With our multi-layer transformation, we can design new MDS codes that have the properties of low computational complexity, optimal repair access for any single node, and relatively small sub-packetization, all of which are critical for maintaining the reliability of distributed storage systems.
MSR codes with high code rates (i.e., @math ) are important in practice. Some existing constructions of high-code-rate MSR codes are found in @cite_25 @cite_32 @cite_15 @cite_28 @cite_3 . It is shown in @cite_22 that a tight lower bound on the sub-packetization level of high-code-rate MSR codes with optimal repair access is @math . More generally, for any @math MDS code with optimal repair access for each of @math nodes (where @math ), the minimum sub-packetization level is @math .
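For concreteness, here is a minimal Python sketch of the two quantities discussed above: the cut-set repair bandwidth at the MSR point and the sub-packetization of an optimal-access construction. The specific (n, k) values are illustrative assumptions, and the formula α = r^⌈n/r⌉ is the sub-packetization attained by the Ye–Barg optimal-access codes, used here only as a reference point.

```python
from math import ceil

def msr_repair_point(n, k, d, alpha):
    """Cut-set bound at the MSR point: each of the d helper nodes sends
    beta = alpha / (d - k + 1) symbols, so total repair bandwidth is
    d * alpha / (d - k + 1), versus k * alpha for naive MDS repair."""
    beta = alpha / (d - k + 1)
    return beta, d * beta

# Illustrative parameters (assumed, not from the paper above).
n, k = 14, 10
r = n - k
alpha = r ** ceil(n / r)                       # Ye-Barg-style sub-packetization: 4**4 = 256
beta, total = msr_repair_point(n, k, d=n - 1, alpha=alpha)
print(alpha, beta, total)                      # 256, 64.0, 832.0  (vs. 2560 for naive repair)
```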
{ "cite_N": [ "@cite_22", "@cite_28", "@cite_32", "@cite_3", "@cite_15", "@cite_25" ], "mid": [ "2953340712", "1647846156", "2765728635", "2963333438", "2606114015", "2963781977" ], "abstract": [ "We present a high-rate @math -MSR code with a sub-packetization level that is polynomial in the dimension @math of the code. While polynomial sub-packetization level was achieved earlier for vector MDS codes that repair systematic nodes optimally, no such MSR code construction is known. In the low-rate regime (i. e., rates less than one-half), MSR code constructions with a linear sub-packetization level are available. But in the high-rate regime (i. e., rates greater than one-half), the known MSR code constructions required a sub-packetization level that is exponential in @math . In the present paper, we construct an MSR code for @math with a fixed rate @math achieveing a sub-packetization level @math . The code allows help-by-transfer repair, i. e., no computations are needed at the helper nodes during repair of a failed node.", "We present a high-rate (n, k, d = n − 1)-MSR code with a sub-packetization level that is polynomial in the dimension k of the code. While polynomial sub-packetization level was achieved earlier for vector MDS codes that repair systematic nodes optimally, no such MSR code construction is known. In the low-rate regime (i. e., rates less than one-half), MSR code constructions with a linear sub-packetization level are available. But in the high-rate regime (i. e., rates greater than one-half), the known MSR code constructions required a sub-packetization level that is exponential in k. In the present paper, we construct an MSR code for d = n − 1 with a fixed rate equation, achieveing a sub-packetization level α = O(kt). The code allows help-by-transfer repair, i. e., no computations are needed at the helper nodes during repair of a failed node.", "The first focus of the present paper, is on lower bounds on the sub-packetization level @math of an MSR code that is capable of carrying out repair in help-by-transfer fashion (also called optimal-access property). We prove here a lower bound on @math which is shown to be tight for the case @math by comparing with recent code constructions in the literature. We also extend our results to an @math MDS code over the vector alphabet. Our objective even here, is on lower bounds on the sub-packetization level @math of an MDS code that can carry out repair of any node in a subset of @math nodes, @math where each node is repaired (linear repair) by help-by-transfer with minimum repair bandwidth. We prove a lower bound on @math for the case of @math . This bound holds for any @math and is shown to be tight, again by comparing with recent code constructions in the literature. Also provided, are bounds for the case @math . We study the form of a vector MDS code having the property that we can repair failed nodes belonging to a fixed set of @math nodes with minimum repair bandwidth and in optimal-access fashion, and which achieve our lower bound on sub-packetization level @math . It turns out interestingly, that such a code must necessarily have a coupled-layer structure, similar to that of the Ye-Barg code.", "The first focus of the present paper, is on lower bounds on the sub-packetization level @math of an MSR code that is capable of carrying out repair in help-by-transfer fashion (also called optimal-access property). 
We prove here a lower bound on @math which is shown to be tight for the case @math by comparing with recent code constructions in the literature. We also extend our results to an @math MDS code over the vector alphabet. Our objective even here is on lower bounds on the sub-packetization level @math of an MDS code that can carry out repair of any node in a subset of @math nodes, @math where each node is repaired (linear repair) by help-by-transfer with minimum repair bandwidth. We prove a lower bound on @math for the case of @math . This bound holds for any @math and is shown to be tight, again by comparing with recent code constructions in the literature. Also provided are bounds for the case @math . We study the form of a vector MDS code having the property that we can repair failed nodes belonging to a fixed set of @math nodes with minimum repair bandwidth and in optimal-access fashion, and which achieve our lower bound on sub-packetization level @math . It turns out, interestingly, that such a code must necessarily have a coupled-layer structure, similar to that of the Ye-Barg code.", "Regenerating codes for distributed storage have attracted much research interest in the past decade. Such codes trade the bandwidth needed to repair a failed node with the overall amount of data stored in the network. Minimum storage regenerating (MSR) codes are an important class of optimal regenerating codes that minimize (first) the amount of data stored per node and (then) the repair bandwidth. Specifically, an @math - @math MSR code @math over @math stores a file @math consisting of @math symbols over @math among @math nodes, each storing @math symbols, in such a way that: 1) the file @math can be recovered by downloading the content of any @math of the @math nodes and 2) the content of any failed node can be reconstructed by accessing any @math of the remaining @math nodes and downloading @math symbols from each of these nodes. In practice, the file @math is typically available in uncoded form on some @math of the @math nodes, known as systematic nodes, and the defining node-repair condition above can be relaxed to requiring the optimal repair bandwidth for systematic nodes only. Such codes are called systematic-repair MSR codes. Unfortunately, finite– @math constructions of @math MSR codes are known only for certain special cases: either low rate, namely @math , or high repair connectivity, namely @math . Our main result in this paper is a finite– @math construction of systematic-repair @math MSR codes for all possible values of parameters @math . We also introduce a generalized construction for @math MSR codes to achieve the optimal repair bandwidth for all values of @math simultaneously.", "An @math maximum distance separable (MDS) array code of length @math , dimension @math , and sub-packetization @math is formed of @math matrices over a finite field @math , with every column of the matrix stored on a separate node in the distributed storage system and viewed as a coordinate of the codeword. Repair of a failed node (recovery of one erased column) can be performed by accessing a set of @math surviving (helper) nodes. The code is said to have the optimal access property if the amount of data accessed at each of the helper nodes meets a lower bound on this quantity. For optimal-access MDS codes with @math , the sub-packetization @math satisfies the bound @math . In our previous work (IEEE Trans. Inf. Theory, vol. 63, no. 4, 2017), for any @math and @math , we presented an explicit construction of optimal-access MDS codes with sub-packetization @math . In this paper, we take up the question of reducing the sub-packetization value @math to make it approach the lower bound. 
We construct an explicit family of optimal-access codes with @math , which differs from the optimal value by at most a factor of @math . These codes can be constructed over any finite field @math as long as @math , and afford low-complexity encoding and decoding procedures. We also define a version of the repair problem that bridges the context of regenerating codes and codes with locality constraints (LRC codes), which we call group repair with optimal access . In this variation, we assume that the set of @math nodes is partitioned into @math repair groups of size @math , and require that the amount of accessed data for repair is the smallest possible whenever the @math helper nodes include all the other @math nodes from the same group as the failed node. For this problem, we construct a family of codes with the group optimal access property. These codes can be constructed over any field @math of size @math , and also afford low-complexity encoding and decoding procedures." ] }
1907.08938
2964314696
An @math maximum distance separable (MDS) code has optimal repair access if the minimum number of symbols accessed from @math surviving nodes is achieved, where @math . Existing results show that the sub-packetization @math of an @math high code rate (i.e., @math ) MDS code with optimal repair access is at least @math . In this paper, we propose a class of multi-layer transformed MDS codes such that the sub-packetization is @math , where @math , and the repair access is optimal for any single node. We show that the sub-packetization of the proposed multi-layer transformed MDS codes is strictly less than the existing known lower bound when @math , achieved by restricting the choice of @math specific helper nodes in repairing a failed node. We further propose multi-layer transformed EVENODD codes that have optimal repair access for any single node and lower sub-packetization than the existing binary MDS array codes with optimal repair access for any single node. With our multi-layer transformation, we can design new MDS codes that have the properties of low computational complexity, optimal repair access for any single node, and relatively small sub-packetization, all of which are critical for maintaining the reliability of distributed storage systems.
There are other practical concerns in distributed storage systems, such as how to mitigate the computational complexity. Binary MDS array codes are a special class of MDS codes that have low computational complexity, since the encoding and decoding procedures involve only XOR operations. Typical examples of binary MDS array codes are EVENODD @cite_8 @cite_6 , X-code @cite_12 and RDP @cite_33 @cite_10 . Some efficient decoding methods for binary MDS array codes are given in @cite_9 @cite_19 @cite_14 @cite_23 @cite_24 . There have also been many studies @cite_29 @cite_0 @cite_30 @cite_9 @cite_26 on the optimal repair bandwidth of binary MDS array codes.
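As a toy illustration of why XOR-only arithmetic keeps encoding and decoding cheap, the NumPy sketch below builds a single parity column and recovers one erased column. It is a minimal stand-in for the idea, not the full double-erasure EVENODD or RDP construction, which additionally uses a diagonal parity column.

```python
import numpy as np

def encode_with_parity(data):
    """Append one parity column: the XOR of the k data columns.
    data: (rows, k) array of bytes (dtype uint8)."""
    parity = np.bitwise_xor.reduce(data, axis=1)
    return np.column_stack([data, parity])

def recover_column(array, lost):
    """Any single erased column equals the XOR of the surviving columns,
    since every row XORs to zero across all k + 1 columns."""
    survivors = [array[:, j] for j in range(array.shape[1]) if j != lost]
    return np.bitwise_xor.reduce(np.stack(survivors, axis=1), axis=1)

data = np.random.randint(0, 256, size=(4, 5), dtype=np.uint8)
coded = encode_with_parity(data)
assert np.array_equal(recover_column(coded, 2), coded[:, 2])
```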
{ "cite_N": [ "@cite_30", "@cite_14", "@cite_26", "@cite_33", "@cite_8", "@cite_9", "@cite_29", "@cite_6", "@cite_24", "@cite_19", "@cite_0", "@cite_23", "@cite_10", "@cite_12" ], "mid": [ "2338020779", "2744811977", "2886748290", "2890964310", "2166011389", "2765335304", "2950891324", "2963754880", "2341012151", "2158646168", "2951303562", "2154063365", "2141222886", "2158398747" ], "abstract": [ "Binary maximum distance separable (MDS) array codes are widely used in storage systems. EVENODD code and row-diagonal parity (RDP) code are two well-known binary MDS array codes with two parity columns. The Blaum–Bruck–Vardy (BBV) code is an extension of the double-erasure-correcting EVENODD code. It is known that the BBV code is always MDS for three parity columns, and sufficient conditions for up to eight parity columns to be MDS are also known. However, the MDS condition for more than eight parity columns is an open problem since then. In this letter, we study the MDS condition of BBV code and give a sufficient MDS condition of the BBV code with more than eight parity columns.", "Binary maximum distance separable (MDS) array codes are a special class of erasure codes for distributed storage that not only provide fault tolerance with minimum storage redundancy, but also achieve low computational complexity. They are constructed by encoding k information columns into r parity columns, in which each element in a column is a bit, such that any k out of the k + r columns suffice to recover all information bits. In addition to providing fault tolerance, it is critical to improve repair performance. Specifically, if a single column fails, our goal is to minimize the repair bandwidth by downloading the least amount of bits from d non-failed columns, where k ≤ d ≤ k + r − 1. However, existing binary MDS codes that achieve high data rates (i.e., k (k + r) > 1 2) and minimum repair bandwidth only support double fault tolerance (i.e., r = 2), which is insufficient for failure-prone distributed storage environments in practice. This paper fills the void by proposing an explicit construction of triple-fault-tolerant (i.e., r = 3) binary MDS array codes that achieve asymptotically minimum repair bandwidth for d = k + 1.", "Binary maximum distance separable (MDS) array codes contain @math information columns and @math parity columns in which each entry is a bit that can tolerate @math arbitrary erasures. When a column in an MDS code fails, it has been proven that we must download at least half of the content from each helper column if @math columns are selected as the helper columns. If the lower bound is achieved such that the @math helper columns can be selected from any @math surviving columns, then the repair is an optimal repair. Otherwise, if the lower bound is achieved with @math specific helper columns, the repair is a weak-optimal repair. This paper proposes a class of binary MDS array codes with @math and @math that asymptotically achieve weak-optimal repair of an information column with @math helper columns. We show that there exist many encoding matrices such that the corresponding binary MDS array codes can asymptotically achieve weak-optimal repair for repairing any information column.", "Consider a binary maximum distance separate (MDS) array code composed of an @math array of bits with @math information columns and @math parity columns, such that any @math out of @math columns suffice to reconstruct the @math information columns. 
Our goal is to provide optimal repair access for binary MDS array codes, meaning that the bandwidth triggered to repair any single failed information or parity column is minimized. In this paper, we propose a generic transformation framework for binary MDS array codes, using EVENODD codes as a motivating example, to support optimal repair access for @math , where @math denotes the number of non-failed columns that are connected for repair. In addition, we show that the efficient decoding property of the original EVENODD codes is maintained after transformation. Furthermore, we show how our transformation framework applies to general binary MDS array codes and enables them to support optimal repair access.", "A new family of maximum distance separable (MDS) array codes is presented. The code arrays contain p information columns and r independent parity columns, each column consisting of p-1 bits, where p is a prime. We extend a previously known construction for the case r=2 to three and more parity columns. It is shown that when r=3 such extension is possible for any prime p. For larger values of r, we give necessary and sufficient conditions for our codes to be MDS, and then prove that if p belongs to a certain class of primes these conditions are satisfied up to r ≤ 8. One of the advantages of the new codes is that encoding and decoding may be accomplished using simple cyclic shifts and XOR operations on the columns of the code array. We develop efficient decoding procedures for the case of two- and three-column errors. This again extends the previously known results for the case of a single-column error. Another primary advantage of our codes is related to the problem of efficient information updates. We present upper and lower bounds on the average number of parity bits which have to be updated in an MDS code over GF(2^m), following an update in a single information bit. This average number is of importance in many storage applications which require frequent updates of information. We show that the upper bound obtained from our codes is close to the lower bound and, most importantly, does not depend on the size of the code symbols.", "Array codes have been widely used in communication and storage systems. To reduce computational complexity, one important property of the array codes is that only exclusive OR operations are used in the encoding and decoding processes. Cauchy Reed–Solomon codes, Rabin-like codes, and circulant Cauchy codes are existing Cauchy maximum-distance separable (MDS) array codes that employ Cauchy matrices over finite fields, circular permutation matrices, and circulant Cauchy matrices, respectively. All these codes can correct any number of failures; however, a critical drawback of existing codes is the high decoding complexity. In this paper, we propose a new construction of Rabin-like codes based on a quotient ring with a cyclic structure. The newly constructed Rabin-like codes have more supported parameters (prime @math is extended to an odd number), such that their word sizes are more flexible than those of the existing Cauchy MDS array codes. An efficient decoding method using LU factorization of the Cauchy matrix can be applied to the newly constructed Rabin-like codes. It is shown that the decoding complexity of the proposed approach is less than that of existing Cauchy MDS array codes. 
Hence, the Rabin-like codes based on the new construction are attractive to distributed storage systems.", "MDS (maximum distance separable) array codes are widely used in storage systems due to their computationally efficient encoding and decoding procedures. An MDS code with r redundancy nodes can correct any r erasures by accessing (reading) all the remaining information in both the systematic nodes and the parity (redundancy) nodes. However, in practice, a single erasure is the most likely failure event; hence, a natural question is how much information do we need to access in order to rebuild a single storage node? We define the rebuilding ratio as the fraction of remaining information accessed during the rebuilding of a single erasure. In our previous work we showed that the optimal rebuilding ratio of 1/r is achievable (using our newly constructed array codes) for the rebuilding of any systematic node; however, all the information needs to be accessed for the rebuilding of the parity nodes. Namely, constructing array codes with a rebuilding ratio of 1/r was left as an open problem. In this paper, we solve this open problem and present array codes that achieve the lower bound of 1/r for rebuilding any single systematic or parity node.", "Maximum distance separable (MDS) codes are optimal error-correcting codes in the sense that they provide the maximum failure tolerance for a given number of parity nodes. Suppose that an MDS code with @math information nodes and @math parity nodes is used to encode data in a distributed storage system. It is known that if @math out of the @math nodes are inaccessible and @math surviving (helper) nodes are used to recover the lost data, then we need to download at least @math fraction of the data stored in each of the helper nodes ( , 2010 and , 2013). If this lower bound is achieved for the repair of any @math erased nodes from any @math helper nodes, we say that the MDS code has the @math -optimal repair property. We study high-rate MDS array codes with the optimal repair property (also known as minimum storage regenerating codes, or MSR codes). Explicit constructions of such codes in the literature are only available for the cases where there are at most three parity nodes, and these existing constructions can only optimally repair a single node failure by accessing all the surviving nodes. In this paper, given any @math and @math , we present two explicit constructions of MDS array codes with the @math -optimal repair property for all @math and @math simultaneously. Codes in the first family can be constructed over any base field @math as long as @math , where @math . The encoding, decoding, repair of failed nodes, and update procedures of these codes all have low complexity. Codes in the second family have the optimal access property and can be constructed over any base field @math as long as @math . Moreover, both code families have the optimal error resilience capability when repairing failed nodes. We also construct several other related families of MDS codes with the optimal repair property.", "Maximum distance separable (MDS) codes are optimal error-correcting codes in the sense that they provide the maximum failure-tolerance for a given number of parity nodes. Suppose that an MDS code with @math information nodes and @math parity nodes is used to encode data in a distributed storage system. 
It is known that if @math out of the @math nodes are inaccessible and @math surviving (helper) nodes are used to recover the lost data, then we need to download at least @math fraction of the data stored in each of the helper nodes (Dimakis et al., 2010 and , 2013). If this lower bound is achieved for the repair of any @math erased nodes from any @math helper nodes, we say that the MDS code has the @math -optimal repair property. We study high-rate MDS array codes with the optimal repair property. Explicit constructions of such codes in the literature are only available for the cases where there are at most 3 parity nodes, and these existing constructions can only optimally repair a single node failure by accessing all the surviving nodes. In this paper, given any @math and @math , we present two explicit constructions of MDS array codes with the @math -optimal repair property for all @math and @math simultaneously. Codes in the first family can be constructed over any base field @math as long as @math , where @math . The encoding, decoding, repair of failed nodes, and update procedures of these codes all have low complexity. Codes in the second family have the optimal access property and can be constructed over any base field @math as long as @math . Moreover, both code families have the optimal error resilience capability when repairing failed nodes. We also construct several other related families of MDS codes with the optimal repair property.", "In distributed storage systems that use coding, the issue of minimizing the communication required to rebuild a storage node after a failure arises. We consider the problem of repairing an erased node in a distributed storage system that uses an EVENODD code. EVENODD codes are maximum distance separable (MDS) array codes that are used to protect against erasures, and only require XOR operations for encoding and decoding. We show that when there are two redundancy nodes, to rebuild one erased systematic node, only 3/4 of the information needs to be transmitted. Interestingly, in many cases, the required disk I/O is also minimized.", "In distributed storage systems that use coding, the issue of minimizing the communication required to rebuild a storage node after a failure arises. We consider the problem of repairing an erased node in a distributed storage system that uses an EVENODD code. EVENODD codes are maximum distance separable (MDS) array codes that are used to protect against erasures, and only require XOR operations for encoding and decoding. We show that when there are two redundancy nodes, to rebuild one erased systematic node, only 3/4 of the information needs to be transmitted. Interestingly, in many cases, the required disk I/O is also minimized.", "In distributed storage systems that employ erasure coding, the issue of minimizing the total communication required to exactly rebuild a storage node after a failure arises. This repair bandwidth depends on the structure of the storage code and the repair strategies used to restore the lost data. Designing high-rate maximum-distance separable (MDS) codes that achieve the optimum repair communication has been a well-known open problem. Our work resolves, in part, this open problem. In this study, we use Hadamard matrices to construct the first explicit two-parity MDS storage code with optimal repair properties for all single node failures, including the parities. 
Our construction relies on a novel method of achieving perfect interference alignment over finite fields with a finite number of symbol extensions. We generalize this construction to design @math -parity MDS codes that achieve the optimum repair communication for single systematic node failures.", "MDS array codes are widely used in storage systems to protect data against erasures. We address the rebuilding ratio problem, namely, in the case of erasures, what is the fraction of the remaining information that needs to be accessed in order to rebuild exactly the lost information? It is clear that when the number of erasures equals the maximum number of erasures that an MDS code can correct, then the rebuilding ratio is 1 (access all the remaining information). However, the interesting (and more practical) case is when the number of erasures is smaller than the erasure correcting capability of the code. For example, consider an MDS code that can correct two erasures: What is the smallest amount of information that one needs to access in order to correct a single erasure? Previous work showed that the rebuilding ratio is bounded between 1/2 and 3/4; however, the exact value was left as an open problem. In this paper, we solve this open problem and prove that for the case of a single erasure with a 2-erasure correcting code, the rebuilding ratio is 1/2. In general, we construct a new family of r-erasure correcting MDS array codes that has optimal rebuilding ratio of 1/r in the case of a single erasure. Our array codes have efficient encoding and decoding algorithms (for the case r = 2 they use a finite field of size 3) and an optimal update property.", "MDS codes are erasure-correcting codes that can correct the maximum number of erasures given the number of redundancy or parity symbols. If an MDS code has r parities and no more than r erasures occur, then by transmitting all the remaining data in the code one can recover the original information. However, it was shown that in order to recover a single symbol erasure, only a fraction of 1/r of the information needs to be transmitted. This fraction is called the repair bandwidth (fraction). Explicit code constructions were given in previous works. If we view each symbol in the code as a vector or a column, then the code forms a 2D array and such codes are especially widely used in storage systems. In this paper, we ask the following question: given the length of the column l, can we construct high-rate MDS array codes with optimal repair bandwidth of 1/r, whose code length is as long as possible? In this paper, we give code constructions such that the code length is (r + 1) log_r l." ] }
1907.08915
2963092220
We propose a method for automatic segmentation of individual muscles from a clinical CT. The method uses Bayesian convolutional neural networks with the U-Net architecture, using Monte Carlo dropout that infers an uncertainty metric in addition to the segmentation label. We evaluated the performance of the proposed method using two data sets: 20 fully annotated CTs of the hip and thigh regions and 18 partially annotated CTs that are publicly available from The Cancer Imaging Archive (TCIA) database. The experiments showed a Dice coefficient (DC) of 0.891 ± 0.016 (mean ± std) and an average symmetric surface distance (ASD) of 0.994 ± 0.230 mm over 19 muscles in the set of 20 CTs. These results were statistically significant improvements compared to the state-of-the-art hierarchical multi-atlas method, which resulted in 0.845 ± 0.031 DC and 1.556 ± 0.444 mm ASD. We evaluated the validity of the uncertainty metric in the multi-class organ segmentation problem and demonstrated a correlation between the pixels with high uncertainty and the segmentation failure. One application of the uncertainty metric in active learning is demonstrated, and the proposed query pixel selection method considerably reduced the manual annotation cost for expanding the training data set. The proposed method allows an accurate patient-specific analysis of individual muscle shapes in a clinical routine. This would open up various applications including personalization of biomechanical simulation and quantitative evaluation of muscle atrophy.
Segmentation of muscle tissue and fat tissue has been studied extensively for the analysis of muscle fat composition. (Note that we refer to muscle tissue here as an object including all muscles, not an individual muscle.) @cite_9 and @cite_18 implemented an algorithm for automated segmentation of the muscle and fat tissues from MRI using a multi-atlas method @cite_12 . @cite_13 used deep learning for segmentation of the muscle and fat tissues in a 2D abdominal CT slice.
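As a rough sketch of the label-fusion step common to multi-atlas methods of this kind, the snippet below takes label maps already propagated from registered atlases and fuses them by per-voxel majority vote. The registration step and any locally weighted fusion scheme are omitted, and the function name is illustrative.

```python
import numpy as np

def majority_vote_fusion(propagated_labels):
    """Fuse integer label maps that have already been propagated from each
    registered atlas to the target image: per-voxel majority vote."""
    stacked = np.stack(propagated_labels)            # (n_atlases, *volume_shape)
    n_labels = int(stacked.max()) + 1
    votes = np.stack([(stacked == l).sum(axis=0) for l in range(n_labels)])
    return votes.argmax(axis=0)                      # fused label volume
```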
{ "cite_N": [ "@cite_13", "@cite_9", "@cite_18", "@cite_12" ], "mid": [ "74568156", "1986399714", "2796249795", "2790662084" ], "abstract": [ "In this paper, we propose a novel approach for segmenting the skelet al muscles in MRI automatically. In order to deal with the absence of contrast between the different muscle classes, we proposed a principled mathematical formulation that integrates prior knowledge with a random walks graph-based formulation. Prior knowledge is represented using a statistical shape atlas that once coupled with the random walks segmentation leads to an efficient iterative linear optimization system. We reveal the potential of our approach on a challenging set of real clinical data.", "We present a novel probabilistic shape representation that implicitly includes prior anatomical volume and adjacency information, termed the generalized log-ratio (GLR) representation. We demonstrate the usefulness of this representation in the task of thigh muscle segmentation. Analysis of the shapes and sizes of thigh muscles can lead to a better understanding of the effects of chronic obstructive pulmonary disease (COPD), which often results in skelet al muscle weakness in lower limbs. However, segmenting these muscles from one another is difficult due to a lack of distinctive features and inter-muscular boundaries that are difficult to detect. We overcome these difficulties by building a shape model in the space of GLR representations. We remove pose variability from the model by employing a presegmentation-based alignment scheme. We also design a rotationally invariant random forest boundary detector that learns common appearances of the interface between muscles from training data. We combine the shape model and the boundary detector into a fully automatic globally optimal segmentation technique. Our segmentation technique produces a probabilistic segmentation that can be used to generate uncertainty information, which can be used to aid subsequent analysis. Our experiments on challenging 3D magnetic resonance imaging data sets show that the use of the GLR representation improves the segmentation accuracy, and yields an average Dice similarity coefficient of @math , comparable to other state-of-the-art thigh segmentation techniques.", "Purpose Patient-specific quantitative assessments of muscle mass and biomechanical musculoskelet al simulations require segmentation of the muscles from medical images. The objective of this work is to automate muscle segmentation from CT data of the hip and thigh.", "The purpose of this study is to evaluate and compare the performance of modern deep learning techniques for automatically recognizing and segmenting multiple organ regions on 3D CT images. CT image segmentation is one of the important task in medical image analysis and is still very challenging. Deep learning approaches have demonstrated the capability of scene recognition and semantic segmentation on nature images and have been used to address segmentation problems of medical images. Although several works showed promising results of CT image segmentation by using deep learning approaches, there is no comprehensive evaluation of segmentation performance of the deep learning on segmenting multiple organs on different portions of CT scans. In this paper, we evaluated and compared the segmentation performance of two different deep learning approaches that used 2D- and 3D deep convolutional neural networks (CNN) without- and with a pre-processing step. 
A conventional approach that presents the state-of-the-art performance of CT image segmentation without deep learning was also used for comparison. A dataset that includes 240 CT images scanned on different portions of human bodies was used for performance evaluation. The maximum number of 17 types of organ regions in each CT scan were segmented automatically and compared to the human annotations by using the ratio of intersection over union (IU) as the criterion. The experimental results demonstrated that the IUs of the segmentation results had mean values of 79% and 67%, averaged over the 17 types of organs segmented by the 3D and 2D deep CNNs, respectively. All the results of the deep learning approaches showed better accuracy and robustness than the conventional segmentation method that used probabilistic atlas and graph-cut methods. The effectiveness and usefulness of deep learning approaches were demonstrated for solving the multi-organ segmentation problem on 3D CT images." ] }
1907.08915
2963092220
We propose a method for automatic segmentation of individual muscles from a clinical CT. The method uses Bayesian convolutional neural networks with the U-Net architecture, using Monte Carlo dropout that infers an uncertainty metric in addition to the segmentation label. We evaluated the performance of the proposed method using two data sets: 20 fully annotated CTs of the hip and thigh regions and 18 partially annotated CTs that are publicly available from The Cancer Imaging Archive (TCIA) database. The experiments showed a Dice coefficient (DC) of 0.891 ± 0.016 (mean ± std) and an average symmetric surface distance (ASD) of 0.994 ± 0.230 mm over 19 muscles in the set of 20 CTs. These results were statistically significant improvements compared to the state-of-the-art hierarchical multi-atlas method, which resulted in 0.845 ± 0.031 DC and 1.556 ± 0.444 mm ASD. We evaluated the validity of the uncertainty metric in the multi-class organ segmentation problem and demonstrated a correlation between the pixels with high uncertainty and the segmentation failure. One application of the uncertainty metric in active learning is demonstrated, and the proposed query pixel selection method considerably reduced the manual annotation cost for expanding the training data set. The proposed method allows an accurate patient-specific analysis of individual muscle shapes in a clinical routine. This would open up various applications including personalization of biomechanical simulation and quantitative evaluation of muscle atrophy.
Segmentation of individual muscles is a much more difficult problem due to the low tissue contrast at the border between neighboring muscles, especially in areas where many muscles are packed contiguously, such as the hip and thigh regions. @cite_15 manually performed segmentation of 35 individual muscles from MRIs of the lower leg in order to investigate the relationship between muscle volume and height or weight. To facilitate automation of individual muscle segmentation, prior knowledge about the shape of each muscle has been introduced @cite_39 . @cite_31 proposed an automated segmentation method for 11 thigh muscles from MRI using a probabilistic shape representation and adjacency information. They evaluated the method using images of the middle part of the left femur (20 cm in length, up to just above the knee) and reported an average Dice coefficient of 0.808. Since the muscles of interest run along a long bone, i.e., the femur, the muscles have similar appearances in axial slices, resulting in less complexity in segmentation compared to the hip region.
{ "cite_N": [ "@cite_15", "@cite_31", "@cite_39" ], "mid": [ "1986399714", "74568156", "2798753173" ], "abstract": [ "We present a novel probabilistic shape representation that implicitly includes prior anatomical volume and adjacency information, termed the generalized log-ratio (GLR) representation. We demonstrate the usefulness of this representation in the task of thigh muscle segmentation. Analysis of the shapes and sizes of thigh muscles can lead to a better understanding of the effects of chronic obstructive pulmonary disease (COPD), which often results in skelet al muscle weakness in lower limbs. However, segmenting these muscles from one another is difficult due to a lack of distinctive features and inter-muscular boundaries that are difficult to detect. We overcome these difficulties by building a shape model in the space of GLR representations. We remove pose variability from the model by employing a presegmentation-based alignment scheme. We also design a rotationally invariant random forest boundary detector that learns common appearances of the interface between muscles from training data. We combine the shape model and the boundary detector into a fully automatic globally optimal segmentation technique. Our segmentation technique produces a probabilistic segmentation that can be used to generate uncertainty information, which can be used to aid subsequent analysis. Our experiments on challenging 3D magnetic resonance imaging data sets show that the use of the GLR representation improves the segmentation accuracy, and yields an average Dice similarity coefficient of @math , comparable to other state-of-the-art thigh segmentation techniques.", "In this paper, we propose a novel approach for segmenting the skelet al muscles in MRI automatically. In order to deal with the absence of contrast between the different muscle classes, we proposed a principled mathematical formulation that integrates prior knowledge with a random walks graph-based formulation. Prior knowledge is represented using a statistical shape atlas that once coupled with the random walks segmentation leads to an efficient iterative linear optimization system. We reveal the potential of our approach on a challenging set of real clinical data.", "We consider the problem of segmenting a biomedical image into anatomical regions of interest. We specifically address the frequent scenario where we have no paired training data that contains images and their manual segmentations. Instead, we employ unpaired segmentation images that we use to build an anatomical prior. Critically these segmentations can be derived from imaging data from a different dataset and imaging modality than the current task. We introduce a generative probabilistic model that employs the learned prior through a convolutional neural network to compute segmentations in an unsupervised setting. We conducted an empirical analysis of the proposed approach in the context of structural brain MRI segmentation, using a multi-study dataset of more than 14,000 scans. Our results show that an anatomical prior enables fast unsupervised segmentation which is typically not possible using standard convolutional networks. The integration of anatomical priors can facilitate CNN-based anatomical segmentation in a range of novel clinical problems, where few or no annotations are available and thus standard networks are not trainable. The code, model definitions and model weights are freely available at http: github.com adalca neuron." ] }
1907.08915
2963092220
We propose a method for automatic segmentation of individual muscles from a clinical CT. The method uses Bayesian convolutional neural networks with the U-Net architecture, using Monte Carlo dropout that infers an uncertainty metric in addition to the segmentation label. We evaluated the performance of the proposed method using two data sets: 20 fully annotated CTs of the hip and thigh regions and 18 partially annotated CTs that are publicly available from The Cancer Imaging Archive (TCIA) database. The experiments showed a Dice coefficient (DC) of 0.891 ± 0.016 (mean ± std) and an average symmetric surface distance (ASD) of 0.994 ± 0.230 mm over 19 muscles in the set of 20 CTs. These results were statistically significant improvements compared to the state-of-the-art hierarchical multi-atlas method, which resulted in 0.845 ± 0.031 DC and 1.556 ± 0.444 mm ASD. We evaluated the validity of the uncertainty metric in the multi-class organ segmentation problem and demonstrated a correlation between the pixels with high uncertainty and the segmentation failure. One application of the uncertainty metric in active learning is demonstrated, and the proposed query pixel selection method considerably reduced the manual annotation cost for expanding the training data set. The proposed method allows an accurate patient-specific analysis of individual muscle shapes in a clinical routine. This would open up various applications including personalization of biomechanical simulation and quantitative evaluation of muscle atrophy.
In CT images, due to the lower soft tissue contrast compared to MRI, segmentation of individual muscles is even more difficult. @cite_21 addressed the automated segmentation of individual muscles from CTs of the hip and thigh regions. The target region was broader than that of @cite_31 , covering the origin to insertion of 19 muscles. They introduced a hierarchization of the multi-atlas segmentation method such that the target region becomes gradually more complex in a hierarchical manner, namely starting with the skin surface, then all muscle tissues as one object, and finally individual muscles at each hierarchy. They reported an average Dice coefficient of 0.838. Although their algorithm produced reasonable accuracy for this highly challenging problem, the computational load was prohibitive for routine clinical applications due to the large number of non-rigid registrations required in the multi-atlas method (41 minutes for segmentation of one CT volume using a high-performance server with 60 cores).
{ "cite_N": [ "@cite_31", "@cite_21" ], "mid": [ "2074208271", "1568207135" ], "abstract": [ "We present a framework for robustly estimating registration between a 3D volume image and a 2D projection image and evaluate its precision and robustness in spine interventions for vertebral localization in the presence of anatomical deformation. The framework employs a normalized gradient information similarity metric and multi-start covariance matrix adaptation evolution strategy optimization with local-restarts, which provided improved robustness against deformation and content mismatch. The parallelized implementation allowed orders-of-magnitude acceleration in computation time and improved the robustness of registration via multi-start global optimization. Experiments involved a cadaver specimen and two CT datasets (supine and prone) and 36 C-arm fluoroscopy images acquired with the specimen in four positions (supine, prone, supine with lordosis, prone with kyphosis), three regions (thoracic, abdominal, and lumbar), and three levels of geometric magnification (1.7, 2.0, 2.4). Registration accuracy was evaluated in terms of projection distance error (PDE) between the estimated and true target points in the projection image, including 14 400 random trials (200 trials on the 72 registration scenarios) with initialization error up to ±200 mm and ±10°. The resulting median PDE was better than 0.1 mm in all cases, depending somewhat on the resolution of input CT and fluoroscopy images. The cadaver experiments illustrated the tradeoff between robustness and computation time, yielding a success rate of 99.993 in vertebral labeling (with 'success' defined as PDE <5 mm) using 1,718 664 ± 96 582 function evaluations computed in 54.0 ± 3.5 s on a mid-range GPU (nVidia, GeForce GTX690). Parameters yielding a faster search (e.g., fewer multi-starts) reduced robustness under conditions of large deformation and poor initialization (99.535 success for the same data registered in 13.1 s), but given good initialization (e.g., ±5 mm, assuming a robust initial run) the same registration could be solved with 99.993 success in 6.3 s. The ability to register CT to fluoroscopy in a manner robust to patient deformation could be valuable in applications such as radiation therapy, interventional radiology, and an assistant to target localization (e.g., vertebral labeling) in image-guided spine surgery.", "This work addresses the challenging problem of simultaneously segmenting multiple anatomical structures in highly varied CT scans. We propose the entangled decision forest (EDF) as a new discriminative classifier which augments the state of the art decision forest, resulting in higher prediction accuracy and shortened decision time. Our main contribution is two-fold. First, we propose entangling the binary tests applied at each tree node in the forest, such that the test result can depend on the result of tests applied earlier in the same tree and at image points offset from the voxel to be classified. This is demonstrated to improve accuracy and capture long-range semantic context. Second, during training, we propose injecting randomness in a guided way, in which node feature types and parameters are randomly drawn from a learned (non-uniform) distribution. This further improves classification accuracy. We assess our probabilistic anatomy segmentation technique using a labeled database of CT image volumes of 250 different patients from various scan protocols and scanner vendors. 
In each volume, 12 anatomical structures have been manually segmented. The database comprises highly varied body shapes and sizes, a wide array of pathologies, scan resolutions, and diverse contrast agents. Quantitative comparisons with state-of-the-art algorithms demonstrate both superior test accuracy and computational efficiency." ] }
1907.08915
2963092220
We propose a method for automatic segmentation of individual muscles from a clinical CT. The method uses Bayesian convolutional neural networks with the U-Net architecture, using Monte Carlo dropout that infers an uncertainty metric in addition to the segmentation label. We evaluated the performance of the proposed method using two data sets: 20 fully annotated CTs of the hip and thigh regions and 18 partially annotated CTs that are publicly available from The Cancer Imaging Archive (TCIA) database. The experiments showed a Dice coefficient (DC) of 0.891 ± 0.016 (mean ± std) and an average symmetric surface distance (ASD) of 0.994 ± 0.230 mm over 19 muscles in the set of 20 CTs. These results were statistically significant improvements compared to the state-of-the-art hierarchical multi-atlas method, which resulted in 0.845 ± 0.031 DC and 1.556 ± 0.444 mm ASD. We evaluated the validity of the uncertainty metric in the multi-class organ segmentation problem and demonstrated a correlation between the pixels with high uncertainty and the segmentation failure. One application of the uncertainty metric in active learning is demonstrated, and the proposed query pixel selection method considerably reduced the manual annotation cost for expanding the training data set. The proposed method allows an accurate patient-specific analysis of individual muscle shapes in a clinical routine. This would open up various applications including personalization of biomechanical simulation and quantitative evaluation of muscle atrophy.
In order to enhance the accuracy and speed of muscle segmentation in CT, we propose an application of convolutional neural networks (CNNs). We investigate the segmentation accuracy as well as a metric indicating the uncertainty of the segmentation using the framework of Bayesian deep learning. Gal @cite_40 found that dropout @cite_35 is equivalent to approximate Bayesian inference, which allows estimation of the model uncertainty. It measures the degree of difference of each test sample from the training data set, arising from the deficiency of training data, namely epistemic uncertainty @cite_22 . This method has been applied to brain lesion segmentation @cite_3 @cite_11 and surgical tool segmentation @cite_23 . Two example applications of the uncertainty metric explored in this study are: 1) prediction of segmentation accuracy without using the ground truth, similar to the goal of @cite_6 , and 2) the active-learning framework @cite_16 @cite_4 for reducing manual annotation costs.
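A minimal sketch of Monte Carlo dropout inference as described here, assuming a PyTorch segmentation model whose dropout layers remain stochastic in train() mode; the number of samples and the use of predictive entropy as the uncertainty map are illustrative choices, not the paper's exact protocol.

```python
import torch

def mc_dropout_segment(model, volume, n_samples=20):
    """Monte Carlo dropout inference: keep dropout stochastic at test time,
    average the per-class probabilities over several forward passes, and use
    the predictive entropy as a voxel-wise uncertainty map."""
    model.train()  # keeps dropout layers active (assumes no, or frozen, batch norm)
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(volume), dim=1) for _ in range(n_samples)]
        ).mean(dim=0)                                    # approximate predictive mean
    entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=1)
    return probs.argmax(dim=1), entropy                  # labels, uncertainty
```

Voxels with high entropy flag likely segmentation failures, which is what makes this metric usable both for accuracy prediction and for selecting query pixels in active learning.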
{ "cite_N": [ "@cite_35", "@cite_4", "@cite_22", "@cite_3", "@cite_6", "@cite_40", "@cite_23", "@cite_16", "@cite_11" ], "mid": [ "2611197392", "2737244778", "2786129249", "2751665805", "2887926055", "2949561789", "2615461745", "2963998559", "2899943572" ], "abstract": [ "Training deep fully convolutional neural networks (F-CNNs) for semantic image segmentation requires access to abundant labeled data. While large datasets of unlabeled image data are available in medical applications, access to manually labeled data is very limited. We propose to automatically create auxiliary labels on initially unlabeled data with existing tools and to use them for pre-training. For the subsequent fine-tuning of the network with manually labeled data, we introduce error corrective boosting (ECB), which emphasizes parameter updates on classes with lower accuracy. Furthermore, we introduce SkipDeconv-Net (SD-Net), a new F-CNN architecture for brain segmentation that combines skip connections with the unpooling strategy for upsampling. The SD-Net addresses challenges of severe class imbalance and errors along boundaries. With application to whole-brain MRI T1 scan segmentation, we generate auxiliary labels on a large dataset with FreeSurfer and fine-tune on two datasets with manual annotations. Our results show that the inclusion of auxiliary labels and ECB yields significant improvements. SD-Net segments a 3D scan in 7 secs in comparison to 30 hours for the closest multi-atlas segmentation method, while reaching similar performance. It also outperforms the latest state-of-the-art F-CNN models.", "This paper presents methods to reduce the complexity of convolutional neural networks (CNN). These include: (1) A method to quickly and easily sparsify a given network. (2) Fine tune the sparse network to obtain the lost accuracy back (3) Quantize the network to be able to implement it using 8-bit fixed point multiplications efficiently. (4) We then show how an inference engine can be designed to take advantage of the sparsity. These techniques were applied to full frame semantic segmentation and the degradation due to the sparsity and quantization is found to be negligible. We show by analysis that the complexity reduction achieved is significant. Results of implementation on Texas Instruments TDA2x SoC [17] are presented. We have modified Caffe CNN framework to do the sparse, quantized training described in this paper. The source code for the training is made available at https: github.com tidsp caffe-jacinto", "Recently, the convolutional neural network (CNN) has been successfully applied to the task of brain tumor segmentation. However, the effectiveness of a CNN-based method is limited by the small receptive field, and the segmentation results don’t perform well in the spatial contiguity. Therefore, many attempts have been made to strengthen the spatial contiguity of the network output. In this paper, we proposed an adversarial training approach to train the CNN network. A discriminator network is trained along with a generator network which produces the synthetic segmentation results. The discriminator network is encouraged to discriminate the synthetic labels from the ground truth labels. Adversarial adjustments provided by the discriminator network are fed back to the generator network to help reduce the differences between the synthetic labels and the ground truth labels and reinforce the spatial contiguity with high-order loss terms. The presented method is evaluated on the Brats2017 training dataset. 
The experimental results demonstrate that the presented method could enhance the spatial contiguity of the segmentation results and improve the segmentation accuracy.", "Training a fully convolutional network for pixel-wise (or voxel-wise) image segmentation normally requires a large number of training images with corresponding ground truth label maps. However, it is a challenge to obtain such a large training set in the medical imaging domain, where expert annotations are time-consuming and difficult to obtain. In this paper, we propose a semi-supervised learning approach, in which a segmentation network is trained from both labelled and unlabelled data. The network parameters and the segmentations for the unlabelled data are alternately updated. We evaluate the method for short-axis cardiac MR image segmentation and it has demonstrated a high performance, outperforming a baseline supervised method. The mean Dice overlap metric is 0.92 for the left ventricular cavity, 0.85 for the myocardium and 0.89 for the right ventricular cavity. It also outperforms a state-of-the-art multi-atlas segmentation method by a large margin and the speed is substantially faster.", "Deep learning (DL) networks have recently been shown to outperform other segmentation methods on various public, medical-image challenge datasets [3, 11, 16], especially for large pathologies. However, in the context of diseases such as Multiple Sclerosis (MS), monitoring all the focal lesions visible on MRI sequences, even very small ones, is essential for disease staging, prognosis, and evaluating treatment efficacy. Moreover, producing deterministic outputs hinders DL adoption into clinical routines. Uncertainty estimates for the predictions would permit subsequent revision by clinicians. We present the first exploration of multiple uncertainty estimates based on Monte Carlo (MC) dropout [4] in the context of deep networks for lesion detection and segmentation in medical images. Specifically, we develop a 3D MS lesion segmentation CNN, augmented to provide four different voxel-based uncertainty measures based on MC dropout. We train the network on a proprietary, large-scale, multi-site, multi-scanner, clinical MS dataset, and compute lesion-wise uncertainties by accumulating evidence from voxel-wise uncertainties within detected lesions. We analyze the performance of voxel-based segmentation and lesion-level detection by choosing operating points based on the uncertainty. Empirical evidence suggests that uncertainty measures consistently allow us to choose superior operating points compared to only using the network’s sigmoid output as a probability.", "We consider the task of learning a classifier for semantic segmentation using weak supervision in the form of image labels which specify the object classes present in the image. Our method uses deep convolutional neural networks (CNNs) and adopts an Expectation-Maximization (EM) based approach. We focus on the following three aspects of EM: (i) initialization; (ii) latent posterior estimation (E-step) and (iii) the parameter update (M-step). We show that saliency and attention maps, our bottom-up and top-down cues respectively, of simple images provide very good cues to learn an initialization for the EM-based algorithm. Intuitively, we show that before trying to learn to segment complex images, it is much easier and highly effective to first learn to segment a set of simple images and then move towards the complex ones. 
Next, in order to update the parameters, we propose minimizing the combination of the standard softmax loss and the KL divergence between the true latent posterior and the likelihood given by the CNN. We argue that this combination is more robust to wrong predictions made by the expectation step of the EM method. We support this argument with empirical and visual results. Extensive experiments and discussions show that: (i) our method is very simple and intuitive; (ii) requires only image-level labels; and (iii) consistently outperforms other weakly-supervised state-of-the-art methods with a very high margin on the PASCAL VOC 2012 dataset.", "We consider the task of learning a classifier for semantic segmentation using weak supervision in the form of image labels specifying objects present in the image. Our method uses deep convolutional neural networks (CNNs) and adopts an Expectation-Maximization (EM) based approach. We focus on the following three aspects of EM: (i) initialization; (ii) latent posterior estimation (E-step) and (iii) the parameter update (M-step). We show that saliency and attention maps, bottom-up and top-down cues respectively, of images with single objects (simple images) provide highly reliable cues to learn an initialization for the EM. Intuitively, given weak supervisions, we first learn to segment simple images and then move towards the complex ones. Next, for updating the parameters (M step), we propose to minimize the combination of the standard softmax loss and the KL divergence between the latent posterior distribution (obtained using the E-step) and the likelihood given by the CNN. This combination is more robust to wrong predictions made by the E step of the EM algorithm. Extensive experiments and discussions show that our method is very simple and intuitive, and outperforms the state-of-the-art method with a very high margin of 3.7 and 3.9 on the PASCAL VOC12 train and test sets respectively, thus setting new state-of-the-art results.", "During the last half decade, convolutional neural networks (CNNs) have triumphed over semantic segmentation, which is a core task of various emerging industrial applications such as autonomous driving and medical imaging. However, to train CNNs requires a huge amount of data, which is difficult to collect and laborious to annotate. Recent advances in computer graphics make it possible to train CNN models on photo-realistic synthetic data with computer-generated annotations. Despite this, the domain mismatch between the real images and the synthetic data significantly decreases the models’ performance. Hence we propose a curriculum-style learning approach to minimize the domain gap in semantic segmentation. The curriculum domain adaptation solves easy tasks first in order to infer some necessary properties about the target domain; in particular, the first task is to learn global label distributions over images and local distributions over landmark superpixels. These are easy to estimate because images of urban traffic scenes have strong idiosyncrasies (e.g., the size and spatial relations of buildings, streets, cars, etc.). We then train the segmentation network in such a way that the network predictions in the target domain follow those inferred properties. In experiments, our method significantly outperforms the baselines as well as the only known existing approach to the same problem.", "Annotating the right data for training deep neural networks is an important challenge. 
Active learning using uncertainty estimates from Bayesian Neural Networks (BNNs) could provide an effective solution to this. Despite being theoretically principled, BNNs require approximations to be applied to large-scale problems, where both performance and uncertainty estimation are crucial. In this paper, we introduce Deep Probabilistic Ensembles (DPEs), a scalable technique that uses a regularized ensemble to approximate a deep BNN. We conduct a series of large-scale visual active learning experiments to evaluate DPEs on classification with the CIFAR-10, CIFAR-100 and ImageNet datasets, and semantic segmentation with the BDD100k dataset. Our models require significantly less training data to achieve competitive performances, and steadily improve upon strong active learning baselines as the annotation budget is increased." ] }
1907.08931
2969534524
Regularization in the optimization of deep neural networks is often critical to avoid undesirable over-fitting leading to better generalization of model. One of the most popular regularization algorithms is to impose @math penalty on the model parameters resulting in the decay of parameters, called weight-decay, and the decay rate is generally constant to all the model parameters in the course of optimization. In contrast to the previous approach based on the constant rate of weight-decay, we propose to consider the residual that measures dissimilarity between the current state of model and observations in the determination of the weight-decay for each parameter in an adaptive way, called adaptive weight-decay (AdaDecay) where the gradient norms are normalized within each layer and the degree of regularization for each parameter is determined in proportional to the magnitude of its gradient using the sigmoid function. We empirically demonstrate the effectiveness of AdaDecay in comparison to the state-of-the-art optimization algorithms using popular benchmark datasets: MNIST, Fashion-MNIST, and CIFAR-10 with conventional neural network models ranging from shallow to deep. The quantitative evaluation of our proposed algorithm indicates that AdaDecay improves generalization leading to better accuracy across all the datasets and models.
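For illustration, a minimal sketch of the per-parameter decay rule as the abstract describes it: gradient magnitudes are normalized within each layer and passed through a sigmoid to scale the base weight-decay coefficient. The zero-mean standardization used for normalization, the scaling so the mean rate stays near the base coefficient, and all hyperparameter values (lr, base_lambda, alpha) are assumptions, not taken from the paper itself.

import numpy as np

def adadecay_step(w, grad, lr=0.1, base_lambda=1e-4, alpha=1.0):
    """One SGD step with AdaDecay-style adaptive weight decay (sketch).

    Treat `w` and `grad` as the parameters and gradients of one layer,
    so the normalization below is layer-wise.
    """
    g_mag = np.abs(grad)
    # Layer-wise normalization of gradient magnitudes (standardization is an assumption).
    g_norm = (g_mag - g_mag.mean()) / (g_mag.std() + 1e-8)
    # Decay rate grows with the normalized gradient magnitude via a sigmoid;
    # the factor 2 keeps the average rate near base_lambda since sigmoid(0) = 0.5.
    decay = base_lambda * 2.0 / (1.0 + np.exp(-alpha * g_norm))
    return w - lr * (grad + decay * w)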
Learning Rate Annealing: The stochastic gradient, computed from a subset of the data, adds noise to the gradient and thereby provides an implicit regularization effect @cite_41. In SGD, parameters are updated by subtracting the stochastic gradient, noise included, multiplied by the learning rate. The learning rate should therefore shrink over time to reduce the noise and let the algorithm converge. To this aim, a variety of learning-rate annealing schedules, e.g., exponential @cite_44 and staircase @cite_53, and adaptive learning rates, e.g., AdaGrad @cite_31, have been proposed. More sophisticated adaptive techniques, e.g., RMSprop @cite_36 and Adam @cite_9, enable parameter-wise control of the learning rates. The drawback of learning-rate techniques as a form of regularization is that they scale the step size and the noise together, as the sketch below illustrates.
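A minimal sketch of the two classical annealing schedules mentioned above; the decay constants (gamma, drop, every) are assumed values for illustration, not those used in the cited papers.

import math

def exponential_lr(lr0, t, gamma=0.01):
    """Exponential annealing: lr(t) = lr0 * exp(-gamma * t)."""
    return lr0 * math.exp(-gamma * t)

def staircase_lr(lr0, t, drop=0.5, every=30):
    """Staircase annealing: multiply the rate by `drop` every `every` epochs."""
    return lr0 * (drop ** (t // every))

Note that shrinking lr in either schedule damps the update w - lr * (grad + noise) as a whole, reducing the useful step size and the regularizing noise at the same time.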
{ "cite_N": [ "@cite_41", "@cite_36", "@cite_53", "@cite_9", "@cite_44", "@cite_31" ], "mid": [ "2120420045", "2950351588", "2768267830", "1813485996", "2951196414", "2766164908" ], "abstract": [ "The performance of stochastic gradient descent (SGD) depends critically on how learning rates are tuned and decreased over time. We propose a method to automatically adjust multiple learning rates so as to minimize the expected error at any one time. The method relies on local gradient variations across samples. In our approach, learning rates can increase as well as decrease, making it suitable for non-stationary problems. Using a number of convex and non-convex learning tasks, we show that the resulting algorithm matches the performance of SGD or other adaptive approaches with their best settings obtained through systematic search, and effectively removes the need for learning rate tuning.", "The performance of stochastic gradient descent (SGD) depends critically on how learning rates are tuned and decreased over time. We propose a method to automatically adjust multiple learning rates so as to minimize the expected error at any one time. The method relies on local gradient variations across samples. In our approach, learning rates can increase as well as decrease, making it suitable for non-stationary problems. Using a number of convex and non-convex learning tasks, we show that the resulting algorithm matches the performance of SGD or other adaptive approaches with their best settings obtained through systematic search, and effectively removes the need for learning rate tuning.", "We study the statistical properties of the endpoint of stochastic gradient descent (SGD). We approximate SGD as a stochastic differential equation (SDE) and consider its Boltzmann Gibbs equilibrium distribution under the assumption of isotropic variance in loss gradients.. Through this analysis, we find that three factors – learning rate, batch size and the variance of the loss gradients – control the trade-off between the depth and width of the minima found by SGD, with wider minima favoured by a higher ratio of learning rate to batch size. In the equilibrium distribution only the ratio of learning rate to batch size appears, implying that it’s invariant under a simultaneous rescaling of each by the same amount. We experimentally show how learning rate and batch size affect SGD from two perspectives: the endpoint of SGD and the dynamics that lead up to it. For the endpoint, the experiments suggest the endpoint of SGD is similar under simultaneous rescaling of batch size and learning rate, and also that a higher ratio leads to flatter minima, both findings are consistent with our theoretical analysis. We note experimentally that the dynamics also seem to be similar under the same rescaling of learning rate and batch size, which we explore showing that one can exchange batch size and learning rate in a cyclical learning rate schedule. Next, we illustrate how noise affects memorization, showing that high noise levels lead to better generalization. Finally, we find experimentally that the similarity under simultaneous rescaling of learning rate and batch size breaks down if the learning rate gets too large or the batch size gets too small.", "Stochastic gradient algorithms have been the main focus of large-scale learning problems and they led to important successes in machine learning. 
The convergence of SGD depends on the careful choice of learning rate and the amount of the noise in stochastic estimates of the gradients. In this paper, we propose a new adaptive learning rate algorithm, which utilizes curvature information for automatically tuning the learning rates. The information about the element-wise curvature of the loss function is estimated from the local statistics of the stochastic first order gradients. We further propose a new variance reduction technique to speed up the convergence. In our preliminary experiments with deep neural networks, we obtained better performance compared to the popular stochastic gradient algorithms.", "Stochastic gradient descent (SGD) is a simple and popular method to solve stochastic optimization problems which arise in machine learning. For strongly convex problems, its convergence rate was known to be O( (T) T), by running SGD for T iterations and returning the average point. However, recent results showed that using a different algorithm, one can get an optimal O(1 T) rate. This might lead one to believe that standard SGD is suboptimal, and maybe should even be replaced as a method of choice. In this paper, we investigate the optimality of SGD in a stochastic setting. We show that for smooth problems, the algorithm attains the optimal O(1 T) rate. However, for non-smooth problems, the convergence rate with averaging might really be ( (T) T), and this is not just an artifact of the analysis. On the flip side, we show that a simple modification of the averaging step suffices to recover the O(1 T) rate, and no other change of the algorithm is necessary. We also present experimental results which support our findings, and point out open problems.", "It is common practice to decay the learning rate. Here we show one can usually obtain the same learning curve on both training and test sets by instead increasing the batch size during training. This procedure is successful for stochastic gradient descent (SGD), SGD with momentum, Nesterov momentum, and Adam. It reaches equivalent test accuracies after the same number of training epochs, but with fewer parameter updates, leading to greater parallelism and shorter training times. We can further reduce the number of parameter updates by increasing the learning rate @math and scaling the batch size @math . Finally, one can increase the momentum coefficient @math and scale @math , although this tends to slightly reduce the test accuracy. Crucially, our techniques allow us to repurpose existing training schedules for large batch training with no hyper-parameter tuning. We train ResNet-50 on ImageNet to @math validation accuracy in under 30 minutes." ] }
1907.08931
2969534524
Regularization in the optimization of deep neural networks is often critical to avoid undesirable over-fitting leading to better generalization of model. One of the most popular regularization algorithms is to impose @math penalty on the model parameters resulting in the decay of parameters, called weight-decay, and the decay rate is generally constant to all the model parameters in the course of optimization. In contrast to the previous approach based on the constant rate of weight-decay, we propose to consider the residual that measures dissimilarity between the current state of model and observations in the determination of the weight-decay for each parameter in an adaptive way, called adaptive weight-decay (AdaDecay) where the gradient norms are normalized within each layer and the degree of regularization for each parameter is determined in proportional to the magnitude of its gradient using the sigmoid function. We empirically demonstrate the effectiveness of AdaDecay in comparison to the state-of-the-art optimization algorithms using popular benchmark datasets: MNIST, Fashion-MNIST, and CIFAR-10 with conventional neural network models ranging from shallow to deep. The quantitative evaluation of our proposed algorithm indicates that AdaDecay improves generalization leading to better accuracy across all the datasets and models.
Dropout is another regularization technique, used in particular with classical shallow networks. Dropout zeroes the activations of randomly selected nodes with a certain probability during training @cite_50. The dropping rate is generally set to a constant, but variants with adaptive rates have been considered, depending on the parameter value @cite_4, the estimated gradient variance @cite_13, a biased gradient estimator @cite_35, the layer depth @cite_26, or the marginal likelihood over noises @cite_47. In practice, however, recent deep models often do not benefit from dropout and its variants. The reason may be that the number of parameters per layer is relatively small compared to classical neural networks, so randomly masking nodes can be harmful to the model.
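A minimal sketch of a dropout forward pass for reference. The keep-and-rescale convention shown here is the common "inverted dropout" variant, an assumption rather than the exact formulation of @cite_50.

import numpy as np

def dropout(x, p=0.5, training=True, rng=None):
    """Zero each activation with probability p during training and rescale
    the survivors by 1/(1-p) so the expected activation is unchanged."""
    rng = rng or np.random.default_rng(0)
    if not training or p == 0.0:
        return x
    mask = rng.random(x.shape) >= p  # keep each unit with probability 1 - p
    return x * mask / (1.0 - p)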
{ "cite_N": [ "@cite_35", "@cite_26", "@cite_4", "@cite_50", "@cite_47", "@cite_13" ], "mid": [ "2890166761", "2885062394", "2963117513", "2583938035", "2136836265", "1034159276" ], "abstract": [ "Deep neural networks often work well when they are over-parameterized and trained with a massive amount of noise and regularization, such as weight decay and dropout. Although dropout is widely used as a regularization technique for fully connected layers, it is often less effective for convolutional layers. This lack of success of dropout for convolutional layers is perhaps due to the fact that neurons in a contiguous region in convolutional layers are strongly correlated so information can still flow through convolutional networks despite dropout. Thus a structured form of dropout is needed to regularize convolutional networks. In this paper, we introduce DropBlock, a form of structured dropout, where neurons in a contiguous region of a feature map are dropped together. Extensive experiments show that DropBlock works much better than dropout in regularizing convolutional networks. On ImageNet, DropBlock with ResNet-50 architecture achieves 77.65 accuracy, which is more than 1 improvement on the previous result of this architecture.", "Multi-layer neural networks have lead to remarkable performance on many kinds of benchmark tasks in text, speech and image processing. Nonlinear parameter estimation in hierarchical models is known to be subject to overfitting and misspecification. One approach to these estimation and related problems (local minima, colinearity, feature discovery etc.) is called Dropout (Hinton, et al 2012, 2016). The Dropout algorithm removes hidden units according to a Bernoulli random variable with probability @math prior to each update, creating random \"shocks\" to the network that are averaged over updates. In this paper we will show that Dropout is a special case of a more general model published originally in 1990 called the Stochastic Delta Rule, or SDR (Hanson, 1990). SDR redefines each weight in the network as a random variable with mean @math and standard deviation @math . Each weight random variable is sampled on each forward activation, consequently creating an exponential number of potential networks with shared weights. Both parameters are updated according to prediction error, thus resulting in weight noise injections that reflect a local history of prediction error and local model averaging. SDR therefore implements a more sensitive local gradient-dependent simulated annealing per weight converging in the limit to a Bayes optimal network. Tests on standard benchmarks (CIFAR) using a modified version of DenseNet shows the SDR outperforms standard Dropout in test error by approx. @math with DenseNet-BC 250 on CIFAR-100 and approx. @math in smaller networks. We also show that SDR reaches the same accuracy that Dropout attains in 100 epochs in as few as 35 epochs.", "Dropout-based regularization methods can be regarded as injecting random noise with pre-defined magnitude to different parts of the neural network during training. It was recently shown that Bayesian dropout procedure not only improves gener- alization but also leads to extremely sparse neural architectures by automatically setting the individual noise magnitude per weight. However, this sparsity can hardly be used for acceleration since it is unstructured. 
In the paper, we propose a new Bayesian model that takes into account the computational structure of neural net- works and provides structured sparsity, e.g. removes neurons and or convolutional channels in CNNs. To do this we inject noise to the neurons outputs while keeping the weights unregularized. We establish the probabilistic model with a proper truncated log-uniform prior over the noise and truncated log-normal variational approximation that ensures that the KL-term in the evidence lower bound is com- puted in closed-form. The model leads to structured sparsity by removing elements with a low SNR from the computation graph and provides significant acceleration on a number of deep neural architectures. The model is easy to implement as it can be formulated as a separate dropout-like layer.", "Deep convolutional networks have achieved successful performance in data mining field. However, training large networks still remains a challenge, as the training data may be insufficient and the model can easily get overfitted. Hence the training process is usually combined with a model regularization. Typical regularizers include weight decay, Dropout, etc. In this paper, we propose a novel regularizer, named Structured Decorrelation Constraint (SDC), which is applied to the activations of the hidden layers to prevent overfitting and achieve better generalization. SDC impels the network to learn structured representations by grouping the hidden units and encouraging the units within the same group to have strong connections during the training procedure. Meanwhile, it forces the units in different groups to learn non-redundant representations by minimizing the cross-covariance between them. Compared with Dropout, SDC reduces the co-adaptions between the hidden units in an explicit way. Besides, we propose a novel approach called Reg-Conv that can help SDC to regularize the complex convolutional layers. Experiments on extensive datasets show that SDC significantly reduces overfitting and yields very meaningful improvements on classification performance (on CIFAR-10 6.22 accuracy promotion and on CIFAR-100 9.63 promotion).", "Recently, it was shown that deep neural networks can perform very well if the activities of hidden units are regularized during learning, e.g, by randomly dropping out 50 of their activities. We describe a method called 'standout' in which a binary belief network is overlaid on a neural network and is used to regularize of its hidden units by selectively setting activities to zero. This 'adaptive dropout network' can be trained jointly with the neural network by approximately computing local expectations of binary dropout variables, computing derivatives using back-propagation, and using stochastic gradient descent. Interestingly, experiments show that the learnt dropout network parameters recapitulate the neural network parameters, suggesting that a good dropout network regularizes activities according to magnitude. When evaluated on the MNIST and NORB datasets, we found that our method achieves lower classification error rates than other feature learning methods, including standard dropout, denoising auto-encoders, and restricted Boltzmann machines. For example, our method achieves 0.80 and 5.8 errors on the MNIST and NORB test sets, which is better than state-of-the-art results obtained using feature learning methods, including those that use convolutional architectures.", "Recently, dropout has seen increasing use in deep learning. 
For deep convolutional neural networks, dropout is known to work well in fully-connected layers. However, its effect in convolutional and pooling layers is still not clear. This paper demonstrates that max-pooling dropout is equivalent to randomly picking activation based on a multinomial distribution at training time. In light of this insight, we advocate employing our proposed probabilistic weighted pooling, instead of commonly used max-pooling, to act as model averaging at test time. Empirical evidence validates the superiority of probabilistic weighted pooling. We also empirically show that the effect of convolutional dropout is not trivial, despite the dramatically reduced possibility of over-fitting due to the convolutional architecture. Elaborately designing dropout training simultaneously in max-pooling and fully-connected layers, we achieve state-of-the-art performance on MNIST, and very competitive results on CIFAR-10 and CIFAR-100, relative to other approaches without data augmentation. Finally, we compare max-pooling dropout and stochastic pooling, both of which introduce stochasticity based on multinomial distributions at pooling stage." ] }
1907.08931
2969534524
Regularization in the optimization of deep neural networks is often critical to avoid undesirable over-fitting leading to better generalization of model. One of the most popular regularization algorithms is to impose @math penalty on the model parameters resulting in the decay of parameters, called weight-decay, and the decay rate is generally constant to all the model parameters in the course of optimization. In contrast to the previous approach based on the constant rate of weight-decay, we propose to consider the residual that measures dissimilarity between the current state of model and observations in the determination of the weight-decay for each parameter in an adaptive way, called adaptive weight-decay (AdaDecay) where the gradient norms are normalized within each layer and the degree of regularization for each parameter is determined in proportional to the magnitude of its gradient using the sigmoid function. We empirically demonstrate the effectiveness of AdaDecay in comparison to the state-of-the-art optimization algorithms using popular benchmark datasets: MNIST, Fashion-MNIST, and CIFAR-10 with conventional neural network models ranging from shallow to deep. The quantitative evaluation of our proposed algorithm indicates that AdaDecay improves generalization leading to better accuracy across all the datasets and models.
Energy Landscape: The geometry of the energy surface is helpful in optimizing the highly complex non-convex problems associated with deep network architectures. It is preferable to drive a solution toward local minima lying on a flat region of the energy surface, which are considered to yield better generalization @cite_40 @cite_23 @cite_25, where flatness is defined around the minimum by its connected region, the curvature of its second-order structure, or the width of its basin, respectively. Geometry-driven optimization methods based on SGD, such as Entropy-SGD @cite_23, have been developed for deep learning problems. In our approach, we do not attempt to measure geometric properties of the loss landscape such as flatness, which incurs extra computational cost, but instead apply explicit regularization to the model parameters.
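The following is a rough sketch of one Entropy-SGD outer step, reconstructed only from the description in @cite_23 (two nested loops of SGD, with Langevin dynamics in the inner loop estimating the local-entropy gradient). The number of inner steps, the coupling gamma, the noise scale, and the running-average constant are all assumed values.

import numpy as np

def entropy_sgd_step(w, grad_fn, lr=0.1, gamma=1e-3, inner_lr=0.01,
                     n_inner=5, eps=1e-4, rng=None):
    """One outer Entropy-SGD step (sketch). `grad_fn(x)` returns a stochastic
    gradient of the loss at x; gamma couples the inner iterate to w."""
    rng = rng or np.random.default_rng(0)
    x, mu = w.copy(), w.copy()
    for _ in range(n_inner):
        # Inner Langevin dynamics: descend the loss plus a quadratic tether to w,
        # with injected Gaussian noise.
        dx = grad_fn(x) - gamma * (w - x)
        x = x - inner_lr * dx + np.sqrt(inner_lr) * eps * rng.standard_normal(w.shape)
        mu = 0.75 * mu + 0.25 * x  # running average of the inner iterates
    # The local-entropy gradient is proportional to (w - mu): pull w toward
    # the average inner iterate, i.e., toward wide, flat regions.
    return w - lr * gamma * (w - mu)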
{ "cite_N": [ "@cite_40", "@cite_25", "@cite_23" ], "mid": [ "2949568260", "2552194003", "2626325961" ], "abstract": [ "This paper proposes a new optimization algorithm called Entropy-SGD for training deep neural networks that is motivated by the local geometry of the energy landscape. Local extrema with low generalization error have a large proportion of almost-zero eigenvalues in the Hessian with very few positive or negative eigenvalues. We leverage upon this observation to construct a local-entropy-based objective function that favors well-generalizable solutions lying in large flat regions of the energy landscape, while avoiding poorly-generalizable solutions located in the sharp valleys. Conceptually, our algorithm resembles two nested loops of SGD where we use Langevin dynamics in the inner loop to compute the gradient of the local entropy before each update of the weights. We show that the new objective has a smoother energy landscape and show improved generalization over SGD using uniform stability, under certain assumptions. Our experiments on convolutional and recurrent networks demonstrate that Entropy-SGD compares favorably to state-of-the-art techniques in terms of generalization error and training time.", "This paper proposes a new optimization algorithm called Entropy-SGD for training deep neural networks that is motivated by the local geometry of the energy landscape. Local extrema with low generalization error have a large proportion of almost-zero eigenvalues in the Hessian with very few positive or negative eigenvalues. We leverage upon this observation to construct a local-entropy-based objective function that favors well-generalizable solutions lying in large flat regions of the energy landscape, while avoiding poorly-generalizable solutions located in the sharp valleys. Conceptually, our algorithm resembles two nested loops of SGD where we use Langevin dynamics in the inner loop to compute the gradient of the local entropy before each update of the weights. We show that the new objective has a smoother energy landscape and show improved generalization over SGD using uniform stability, under certain assumptions. Our experiments on convolutional and recurrent neural networks demonstrate that Entropy-SGD compares favorably to state-of-the-art techniques in terms of generalization error and training time.", "We study the properties of common loss surfaces through their Hessian matrix. In particular, in the context of deep learning, we empirically show that the spectrum of the Hessian is composed of two parts: (1) the bulk centered near zero, (2) and outliers away from the bulk. We present numerical evidence and mathematical justifications to the following conjectures laid out by (2016): Fixing data, increasing the number of parameters merely scales the bulk of the spectrum; fixing the dimension and changing the data (for instance adding more clusters or making the data less separable) only affects the outliers. We believe that our observations have striking implications for non-convex optimization in high dimensions. First, the flatness of such landscapes (which can be measured by the singularity of the Hessian) implies that classical notions of basins of attraction may be quite misleading. And that the discussion of wide narrow basins may be in need of a new perspective around over-parametrization and redundancy that are able to create large connected components at the bottom of the landscape. 
Second, the dependence of small number of large eigenvalues to the data distribution can be linked to the spectrum of the covariance matrix of gradients of model outputs. With this in mind, we may reevaluate the connections within the data-architecture-algorithm framework of a model, hoping that it would shed light into the geometry of high-dimensional and non-convex spaces in modern applications. In particular, we present a case that links the two observations: small and large batch gradient descent appear to converge to different basins of attraction but we show that they are in fact connected through their flat region and so belong to the same basin." ] }
1907.08931
2969534524
Regularization in the optimization of deep neural networks is often critical to avoid undesirable over-fitting leading to better generalization of model. One of the most popular regularization algorithms is to impose @math penalty on the model parameters resulting in the decay of parameters, called weight-decay, and the decay rate is generally constant to all the model parameters in the course of optimization. In contrast to the previous approach based on the constant rate of weight-decay, we propose to consider the residual that measures dissimilarity between the current state of model and observations in the determination of the weight-decay for each parameter in an adaptive way, called adaptive weight-decay (AdaDecay) where the gradient norms are normalized within each layer and the degree of regularization for each parameter is determined in proportional to the magnitude of its gradient using the sigmoid function. We empirically demonstrate the effectiveness of AdaDecay in comparison to the state-of-the-art optimization algorithms using popular benchmark datasets: MNIST, Fashion-MNIST, and CIFAR-10 with conventional neural network models ranging from shallow to deep. The quantitative evaluation of our proposed algorithm indicates that AdaDecay improves generalization leading to better accuracy across all the datasets and models.
Variance Reduction: The variance of stochastic gradients is detrimental to SGD, motivating variance-reduction techniques @cite_10 @cite_33 @cite_29 @cite_15 @cite_34 @cite_1 @cite_48 that aim to reduce the variance incurred by the stochastic estimation process and to improve the convergence rate, mainly for convex optimization, with some extensions to non-convex problems @cite_42 @cite_30 @cite_32. Among the most practical algorithms for better convergence rates are momentum @cite_37, modified momentum for accelerated gradients @cite_3, and stochastic estimation of the accelerated gradient (Accelerated-SGD) @cite_38. These algorithms focus more on the efficiency of convergence than on the generalization of the model.
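For reference, textbook formulations of the classical momentum and Nesterov updates discussed above; these are generic sketches, not necessarily the exact variants of @cite_37 @cite_3 @cite_38, and the step size and momentum coefficient are assumed values.

def momentum_step(w, v, grad, lr=0.1, beta=0.9):
    """Heavy-ball momentum: accumulate gradients into velocity v, then step."""
    v = beta * v + grad
    return w - lr * v, v

def nesterov_step(w, v, grad_at_lookahead, lr=0.1, beta=0.9):
    """Nesterov accelerated gradient: the caller evaluates the gradient at the
    look-ahead point w - lr * beta * v before invoking this update."""
    v = beta * v + grad_at_lookahead
    return w - lr * v, v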
{ "cite_N": [ "@cite_30", "@cite_38", "@cite_37", "@cite_33", "@cite_48", "@cite_29", "@cite_42", "@cite_1", "@cite_32", "@cite_3", "@cite_15", "@cite_34", "@cite_10" ], "mid": [ "2962798535", "2769644379", "1501362825", "2120420045", "2950351588", "1813485996", "2963965485", "2304667012", "2951196414", "2500690480", "2574426558", "2399042127", "1574269637" ], "abstract": [ "We study optimization algorithms based on variance reduction for stochastic gradient descent (SGD). Remarkable recent progress has been made in this direction through development of algorithms like SAG, SVRG, SAGA. These algorithms have been shown to outperform SGD, both theoretically and empirically. However, asynchronous versions of these algorithms—a crucial requirement for modern large-scale applications—have not been studied. We bridge this gap by presenting a unifying framework for many variance reduction techniques. Subsequently, we propose an asynchronous algorithm grounded in our framework, and prove its fast convergence. An important consequence of our general approach is that it yields asynchronous versions of variance reduction algorithms such as SVRG and SAGA as a byproduct. Our method achieves near linear speedup in sparse settings common to machine learning. We demonstrate the empirical performance of our method through a concrete realization of asynchronous SVRG.", "Parallel implementations of stochastic gradient descent (SGD) have received significant research attention, thanks to its excellent scalability properties. A fundamental barrier when parallelizing SGD is the high bandwidth cost of communicating gradient updates between nodes; consequently, several lossy compresion heuristics have been proposed, by which nodes only communicate quantized gradients. Although effective in practice, these heuristics do not always guarantee convergence, and it is not clear whether they can be improved. In this paper, we propose Quantized SGD (QSGD), a family of compression schemes for gradient updates which provides convergence guarantees. QSGD allows the user to smoothly trade off and : nodes can adjust the number of bits sent per iteration, at the cost of possibly higher variance. We show that this trade-off is inherent, in the sense that improving it past some threshold would violate information-theoretic lower bounds. QSGD guarantees convergence for convex and non-convex objectives, under asynchrony, and can be extended to stochastic variance-reduced techniques. When applied to training deep neural networks for image classification and automated speech recognition, QSGD leads to significant reductions in end-to-end training time. For example, on 16GPUs, we can train the ResNet152 network to full accuracy on ImageNet 1.8x faster than the full-precision variant.", "We study optimization algorithms based on variance reduction for stochastic gradient descent (SGD). Remarkable recent progress has been made in this direction through development of algorithms like SAG, SVRG, SAGA. These algorithms have been shown to outperform SGD, both theoretically and empirically. However, asynchronous versions of these algorithms---a crucial requirement for modern large-scale applications---have not been studied. We bridge this gap by presenting a unifying framework for many variance reduction techniques. Subsequently, we propose an asynchronous algorithm grounded in our framework, and prove its fast convergence. 
An important consequence of our general approach is that it yields asynchronous versions of variance reduction algorithms such as SVRG and SAGA as a byproduct. Our method achieves near linear speedup in sparse settings common to machine learning. We demonstrate the empirical performance of our method through a concrete realization of asynchronous SVRG.", "The performance of stochastic gradient descent (SGD) depends critically on how learning rates are tuned and decreased over time. We propose a method to automatically adjust multiple learning rates so as to minimize the expected error at any one time. The method relies on local gradient variations across samples. In our approach, learning rates can increase as well as decrease, making it suitable for non-stationary problems. Using a number of convex and non-convex learning tasks, we show that the resulting algorithm matches the performance of SGD or other adaptive approaches with their best settings obtained through systematic search, and effectively removes the need for learning rate tuning.", "The performance of stochastic gradient descent (SGD) depends critically on how learning rates are tuned and decreased over time. We propose a method to automatically adjust multiple learning rates so as to minimize the expected error at any one time. The method relies on local gradient variations across samples. In our approach, learning rates can increase as well as decrease, making it suitable for non-stationary problems. Using a number of convex and non-convex learning tasks, we show that the resulting algorithm matches the performance of SGD or other adaptive approaches with their best settings obtained through systematic search, and effectively removes the need for learning rate tuning.", "Stochastic gradient algorithms have been the main focus of large-scale learning problems and they led to important successes in machine learning. The convergence of SGD depends on the careful choice of learning rate and the amount of the noise in stochastic estimates of the gradients. In this paper, we propose a new adaptive learning rate algorithm, which utilizes curvature information for automatically tuning the learning rates. The information about the element-wise curvature of the loss function is estimated from the local statistics of the stochastic first order gradients. We further propose a new variance reduction technique to speed up the convergence. In our preliminary experiments with deep neural networks, we obtained better performance compared to the popular stochastic gradient algorithms.", "We study nonconvex finite-sum problems and analyze stochastic variance reduced gradient (SVRG) methods for them. SVRG and related methods have recently surged into prominence for convex optimization given their edge over stochastic gradient descent (SGD); but their theoretical analysis almost exclusively assumes convexity. In contrast, we obtain nonasymptotic rates of convergence of SVRG for nonconvex optimization, showing that it is provably faster than SGD and gradient descent. We also analyze a subclass of nonconvex problems on which SVRG attains linear convergence to the global optimum. We extend our analysis to mini-batch variants, showing (theoretical) linear speedup due to minibatching in parallel settings.", "We study nonconvex finite-sum problems and analyze stochastic variance reduced gradient (SVRG) methods for them. 
SVRG and related methods have recently surged into prominence for convex optimization given their edge over stochastic gradient descent (SGD); but their theoretical analysis almost exclusively assumes convexity. In contrast, we prove non-asymptotic rates of convergence (to stationary points) of SVRG for nonconvex optimization, and show that it is provably faster than SGD and gradient descent. We also analyze a subclass of nonconvex problems on which SVRG attains linear convergence to the global optimum. We extend our analysis to mini-batch variants of SVRG, showing (theoretical) linear speedup due to mini-batching in parallel settings.", "Stochastic gradient descent (SGD) is a simple and popular method to solve stochastic optimization problems which arise in machine learning. For strongly convex problems, its convergence rate was known to be O( (T) T), by running SGD for T iterations and returning the average point. However, recent results showed that using a different algorithm, one can get an optimal O(1 T) rate. This might lead one to believe that standard SGD is suboptimal, and maybe should even be replaced as a method of choice. In this paper, we investigate the optimality of SGD in a stochastic setting. We show that for smooth problems, the algorithm attains the optimal O(1 T) rate. However, for non-smooth problems, the convergence rate with averaging might really be ( (T) T), and this is not just an artifact of the analysis. On the flip side, we show that a simple modification of the averaging step suffices to recover the O(1 T) rate, and no other change of the algorithm is necessary. We also present experimental results which support our findings, and point out open problems.", "We consider stochastic strongly convex optimization with a complex inequality constraint. This complex inequality constraint may lead to computationally expensive projections in algorithmic iterations of the stochastic gradient descent (SGD) methods. To reduce the computation costs pertaining to the projections, we propose an Epoch-Projection Stochastic Gradient Descent (Epro-SGD) method. The proposed Epro-SGD method consists of a sequence of epochs; it applies SGD to an augmented objective function at each iteration within the epoch, and then performs a projection at the end of each epoch. Given a strongly convex optimization and for a total number of @math iterations, Epro-SGD requires only @math projections, and meanwhile attains an optimal convergence rate of @math , both in expectation and with a high probability. To exploit the structure of the optimization problem, we propose a proximal variant of Epro-SGD, namely Epro-ORDA, based on the optimal regularized dual averaging method. We apply the proposed methods on real-world applications; the empirical results demonstrate the effectiveness of our methods.", "Variance Reducing (VR) stochastic methods are fast-converging alternatives to the classical Stochastic Gradient Descent (SGD) for solving large-scale regularized finite sum problems, especially when a highly accurate solution is required. One critical step in VR is the function sampling. State-of-the-art VR algorithms such as SVRG and SAGA, employ either Uniform Probability (UP) or Importance Probability (IP), which is deficient in reducing the variance and hence leads to suboptimal convergence rate. In this paper, we propose a novel sampling scheme that explicitly computes some Adaptive Probability (AP) at each iteration. 
Analysis shows that, equipped with AP, both SVRG and SAGA yield provably better convergence rate than the ones with UP or IP, which is confirmed in experiments. Additionally, to cut down the per iteration computation load, an efficient variant is proposed by utilizing AP periodically, whose performance is empirically validated.", "Two types of low cost-per-iteration gradient descent methods have been extensively studied in parallel. One is online or stochastic gradient descent (OGD SGD), and the other is randomzied coordinate descent (RBCD). In this paper, we combine the two types of methods together and propose online randomized block coordinate descent (ORBCD). At each iteration, ORBCD only computes the partial gradient of one block coordinate of one mini-batch samples. ORBCD is well suited for the composite minimization problem where one function is the average of the losses of a large number of samples and the other is a simple regularizer defined on high dimensional variables. We show that the iteration complexity of ORBCD has the same order as OGD or SGD. For strongly convex functions, by reducing the variance of stochastic gradients, we show that ORBCD can converge at a geometric rate in expectation, matching the convergence rate of SGD with variance reduction and RBCD.", "Stochastic gradient descent (SGD) on a low-rank factorization is commonly employed to speed up matrix problems including matrix completion, subspace tracking, and SDP relaxation. In this paper, we exhibit a step size scheme for SGD on a low-rank least-squares problem, and we prove that, under broad sampling conditions, our method converges globally from a random starting point within @math steps with constant probability for constant-rank problems. Our modification of SGD relates it to stochastic power iteration. We also show experiments to illustrate the runtime and convergence of the algorithm." ] }
1907.08931
2969534524
Regularization in the optimization of deep neural networks is often critical to avoid undesirable over-fitting leading to better generalization of model. One of the most popular regularization algorithms is to impose @math penalty on the model parameters resulting in the decay of parameters, called weight-decay, and the decay rate is generally constant to all the model parameters in the course of optimization. In contrast to the previous approach based on the constant rate of weight-decay, we propose to consider the residual that measures dissimilarity between the current state of model and observations in the determination of the weight-decay for each parameter in an adaptive way, called adaptive weight-decay (AdaDecay) where the gradient norms are normalized within each layer and the degree of regularization for each parameter is determined in proportional to the magnitude of its gradient using the sigmoid function. We empirically demonstrate the effectiveness of AdaDecay in comparison to the state-of-the-art optimization algorithms using popular benchmark datasets: MNIST, Fashion-MNIST, and CIFAR-10 with conventional neural network models ranging from shallow to deep. The quantitative evaluation of our proposed algorithm indicates that AdaDecay improves generalization leading to better accuracy across all the datasets and models.
Weight-Decay: Weight-decay is an explicit form of regularization in which a penalty term is added to the energy function; specifically, the @math -norm of the parameters is used to penalize large weight values. In contrast to implicit methods, e.g., stochastic updates and dropout, one can directly control the regularization effect through the weight-decay coefficient. The coefficient is tuned by hand @cite_14 @cite_11 or learned by Bayesian optimization @cite_22 @cite_0. However, in contrast to the recent development of adaptive methods for dropout @cite_4 @cite_13 @cite_35 @cite_26 @cite_47 and learning rates @cite_31 @cite_36 @cite_9 in deep optimization, a constant weight-decay coefficient is usually employed. Layer-wise weight-decay has been considered in @cite_52 @cite_21, where different weight-decay coefficients are assigned to different layers of the network using the variance of the gradients within each layer. The drawback of the layer-wise method @cite_52 @cite_21 is that it assumes the layers are aligned in a single sequence; skip-connections @cite_7 @cite_12 @cite_24, a key architectural element of recent deep networks, make this assumption non-trivial.
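As a minimal illustration of how a constant or layer-wise weight-decay coefficient enters a plain SGD update; the coefficient values and the dict-based layer mapping below are illustrative assumptions, not taken from the cited works.

def sgd_weight_decay(params, grads, lr=0.1, lambdas=None, default_lambda=1e-4):
    """One SGD step with weight decay. `params` and `grads` are lists of
    per-layer arrays; `lambdas` maps layer index -> decay coefficient, so an
    empty mapping reproduces the usual constant-coefficient setting."""
    lambdas = lambdas or {}
    return [w - lr * (g + lambdas.get(i, default_lambda) * w)
            for i, (w, g) in enumerate(zip(params, grads))]

In this form, the update decays each weight toward zero at a rate set by its layer's coefficient, whereas AdaDecay instead assigns the rate per parameter from its normalized gradient magnitude.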
{ "cite_N": [ "@cite_35", "@cite_31", "@cite_14", "@cite_4", "@cite_22", "@cite_26", "@cite_7", "@cite_36", "@cite_9", "@cite_21", "@cite_52", "@cite_0", "@cite_24", "@cite_47", "@cite_13", "@cite_12", "@cite_11" ], "mid": [ "2898564876", "2788597178", "2885062394", "2583938035", "2144513243", "2125059447", "2798393061", "2553902701", "2962980542", "2787919999", "2626017178", "2964292098", "2963117513", "2119883478", "2907176385", "2519655882", "2582745083" ], "abstract": [ "Weight decay is one of the standard tricks in the neural network toolbox, but the reasons for its regularization effect are poorly understood, and recent results have cast doubt on the traditional interpretation in terms of @math regularization. Literal weight decay has been shown to outperform @math regularization for optimizers for which they differ. We empirically investigate weight decay for three optimization algorithms (SGD, Adam, and K-FAC) and a variety of network architectures. We identify three distinct mechanisms by which weight decay exerts a regularization effect, depending on the particular optimization algorithm and architecture: (1) increasing the effective learning rate, (2) approximately regularizing the input-output Jacobian norm, and (3) reducing the effective damping coefficient for second-order optimization. Our results provide insight into how to improve the regularization of neural networks.", "In this paper, we propose layer-wise weight decay for efficient training of deep neural networks. Our method sets different values of the weight-decay coefficients layer by layer so that the ratio of the scale of back-propagated gradients and that of the weight decay is constant throughout the network. By utilizing such a setting, we can avoid under or over-fitting and train all layers properly without having to tune the coefficients layer by layer. Experimental results show that our method can enhance the performance of existing deep neural networks without any change of network models.", "Multi-layer neural networks have lead to remarkable performance on many kinds of benchmark tasks in text, speech and image processing. Nonlinear parameter estimation in hierarchical models is known to be subject to overfitting and misspecification. One approach to these estimation and related problems (local minima, colinearity, feature discovery etc.) is called Dropout (Hinton, et al 2012, 2016). The Dropout algorithm removes hidden units according to a Bernoulli random variable with probability @math prior to each update, creating random \"shocks\" to the network that are averaged over updates. In this paper we will show that Dropout is a special case of a more general model published originally in 1990 called the Stochastic Delta Rule, or SDR (Hanson, 1990). SDR redefines each weight in the network as a random variable with mean @math and standard deviation @math . Each weight random variable is sampled on each forward activation, consequently creating an exponential number of potential networks with shared weights. Both parameters are updated according to prediction error, thus resulting in weight noise injections that reflect a local history of prediction error and local model averaging. SDR therefore implements a more sensitive local gradient-dependent simulated annealing per weight converging in the limit to a Bayes optimal network. Tests on standard benchmarks (CIFAR) using a modified version of DenseNet shows the SDR outperforms standard Dropout in test error by approx. 
@math with DenseNet-BC 250 on CIFAR-100 and approx. @math in smaller networks. We also show that SDR reaches the same accuracy that Dropout attains in 100 epochs in as few as 35 epochs.", "Deep convolutional networks have achieved successful performance in data mining field. However, training large networks still remains a challenge, as the training data may be insufficient and the model can easily get overfitted. Hence the training process is usually combined with a model regularization. Typical regularizers include weight decay, Dropout, etc. In this paper, we propose a novel regularizer, named Structured Decorrelation Constraint (SDC), which is applied to the activations of the hidden layers to prevent overfitting and achieve better generalization. SDC impels the network to learn structured representations by grouping the hidden units and encouraging the units within the same group to have strong connections during the training procedure. Meanwhile, it forces the units in different groups to learn non-redundant representations by minimizing the cross-covariance between them. Compared with Dropout, SDC reduces the co-adaptions between the hidden units in an explicit way. Besides, we propose a novel approach called Reg-Conv that can help SDC to regularize the complex convolutional layers. Experiments on extensive datasets show that SDC significantly reduces overfitting and yields very meaningful improvements on classification performance (on CIFAR-10 6.22 accuracy promotion and on CIFAR-100 9.63 promotion).", "It has been observed in numerical simulations that a weight decay can improve generalization in a feed-forward neural network. This paper explains why. It is proven that a weight decay has two effects in a linear network. First, it suppresses any irrelevant components of the weight vector by choosing the smallest vector that solves the learning problem. Second, if the size is chosen right, a weight decay can suppress some of the effects of static noise on the targets, which improves generalization quite a lot. It is then shown how to extend these results to networks with hidden layers and non-linear units. Finally the theory is confirmed by some numerical simulations using the data from NetTalk.", "Weight decay was proposed to reduce overfitting which often appears in the generalization tasks of artificial neural nets. Here weight decay is applied to a well defined model system based on a single layer perceptron, which exhibits strong overfitting. Since we know for this system the optimal nonoverfitting solution we can compare the effect of the weight decay with this solution. A strategy to find the optimal weight decay strength, which leads to the optimal solution for any number of examples, is proposed.", "A major challenge in training deep neural networks is overfitting, i.e. inferior performance on unseen test examples compared to performance on training examples. To reduce overfitting, stochastic regularization methods have shown superior performance compared to deterministic weight penalties on a number of image recognition tasks. Stochastic methods such as Dropout and Shakeout, in expectation, are equivalent to imposing a ridge and elastic-net penalty on the model parameters, respectively. However, the choice of the norm of weight penalty is problem dependent and is not restricted to @math . 
Therefore, in this paper we propose the Bridgeout stochastic regularization technique and prove that it is equivalent to an @math penalty on the weights, where the norm @math can be learned as a hyperparameter from data. Experimental results show that Bridgeout results in sparse model weights, improved gradients and superior classification performance compared to Dropout and Shakeout on synthetic and real datasets.", "Regularization is key for deep learning since it allows training more complex models while keeping lower levels of overfitting. However, the most prevalent regularizations do not leverage all the capacity of the models since they rely on reducing the effective number of parameters. Feature decorrelation is an alternative for using the full capacity of the models but the overfitting reduction margins are too narrow given the overhead it introduces. In this paper, we show that regularizing negatively correlated features is an obstacle for effective decorrelation and present OrthoReg, a novel regularization technique that locally enforces feature orthogonality. As a result, imposing locality constraints in feature decorrelation removes interferences between negatively correlated feature weights, allowing the regularizer to reach higher decorrelation bounds, and reducing the overfitting more effectively. In particular, we show that the models regularized with OrthoReg have higher accuracy bounds even when batch normalization and dropout are present. Moreover, since our regularization is directly performed on the weights, it is especially suitable for fully convolutional neural networks, where the weight space is constant compared to the feature map space. As a result, we are able to reduce the overfitting of state-of-the-art CNNs on CIFAR-10, CIFAR-100, and SVHN.", "Regularization is key for deep learning since it allows training more complex models while keeping lower levels of overfitting. However, the most prevalent regularizations do not leverage all the capacity of the models since they rely on reducing the effective number of parameters. Feature decorrelation is an alternative for using the full capacity of the models but the overfitting reduction margins are too narrow given the overhead it introduces. In this paper, we show that regularizing negatively correlated features is an obstacle for effective decorrelation and present OrthoReg, a novel regularization technique that locally enforces feature orthogonality. As a result, imposing locality constraints in feature decorrelation removes interferences between negatively correlated feature weights, allowing the regularizer to reach higher decorrelation bounds, and reducing the overfitting more effectively. In particular, we show that the models regularized with OrthoReg have higher accuracy bounds even when batch normalization and dropout are present. Moreover, since our regularization is directly performed on the weights, it is especially suitable for fully convolutional neural networks, where the weight space is constant compared to the feature map space. As a result, we are able to reduce the overfitting of state-of-the-art CNNs on CIFAR-10, CIFAR-100, and SVHN.", "The impressive success of modern deep neural networks on computer vision tasks has been achieved through models of very large capacity compared to the number of available training examples. This overparameterization is often said to be controlled with the help of different regularization techniques, mainly weight decay and dropout. 
However, since these techniques reduce the effective capacity of the model, typically even deeper and wider architectures are required to compensate for the reduced capacity. Therefore, there seems to be a waste of capacity in this practice. In this paper we build upon recent research that suggests that explicit regularization may not be as important as widely believed and carry out an ablation study that concludes that weight decay and dropout may not be necessary for object recognition if enough data augmentation is introduced.", "Batch Normalization is a commonly used trick to improve the training of deep neural networks. These neural networks use L2 regularization, also called weight decay, ostensibly to prevent overfitting. However, we show that L2 regularization has no regularizing effect when combined with normalization. Instead, regularization has an influence on the scale of weights, and thereby on the effective learning rate. We investigate this dependence, both in theory, and experimentally. We show that popular optimization methods such as ADAM only partially eliminate the influence of normalization on the learning rate. This leads to a discussion on other ways to mitigate this issue.", "We present a theoretically grounded approach to train deep neural networks, including recurrent networks, subject to class-dependent label noise. We propose two procedures for loss correction that are agnostic to both application domain and network architecture. They simply amount to at most a matrix inversion and multiplication, provided that we know the probability of each class being corrupted into another. We further show how one can estimate these probabilities, adapting a recent technique for noise estimation to the multi-class setting, and thus providing an end-to-end framework. Extensive experiments on MNIST, IMDB, CIFAR-10, CIFAR-100 and a large scale dataset of clothing images employing a diversity of architectures &#x2014; stacking dense, convolutional, pooling, dropout, batch normalization, word embedding, LSTM and residual layers &#x2014; demonstrate the noise robustness of our proposals. Incidentally, we also prove that, when ReLU is the only non-linearity, the loss curvature is immune to class-dependent label noise.", "Dropout-based regularization methods can be regarded as injecting random noise with pre-defined magnitude to different parts of the neural network during training. It was recently shown that Bayesian dropout procedure not only improves gener- alization but also leads to extremely sparse neural architectures by automatically setting the individual noise magnitude per weight. However, this sparsity can hardly be used for acceleration since it is unstructured. In the paper, we propose a new Bayesian model that takes into account the computational structure of neural net- works and provides structured sparsity, e.g. removes neurons and or convolutional channels in CNNs. To do this we inject noise to the neurons outputs while keeping the weights unregularized. We establish the probabilistic model with a proper truncated log-uniform prior over the noise and truncated log-normal variational approximation that ensures that the KL-term in the evidence lower bound is com- puted in closed-form. The model leads to structured sparsity by removing elements with a low SNR from the computation graph and provides significant acceleration on a number of deep neural architectures. 
The model is easy to implement as it can be formulated as a separate dropout-like layer.", "Under certain conditions (known as the restricted isometry property, or RIP) on the mN matrix ˆ (where m<N ), vectors x 2 R N that are sparse (i.e., have most of their entries equal to 0) can be recovered exactly from y WDx even though ˆ � 1 .y is typically an .Nm -dimensional hyperplane; in addition, x is then equal to the element in ˆ � 1 .y of minimal 1-norm. This minimal element can be identified via linear programming algorithms. We study an alternative method of determining x, as the limit of an iteratively reweighted least squares (IRLS) algorithm. The main step of this IRLS finds, for a given weight vector w, the element in ˆ � 1 .y with smallest 2.w -norm. If x .n is the solution at itera- tion step n, then the new weight w .n is defined by w .n WD Œjx .n j 2 C \" 2� � 1=2 , i D 1; :::;N , for a decreasing sequence of adaptively defined \"n; this updated weight is then used to obtain x .nC1 and the process is repeated. We prove that whensatisfies the RIP conditions, the sequence x .n converges for all y, re- gardless of whether ˆ � 1 .y contains a sparse vector. If there is a sparse vector in ˆ � 1 .y , then the limit is this sparse vector, and when x .n is sufficiently close to the limit, the remaining steps of the algorithm converge exponentially fast (linear convergence in the terminology of numerical optimization). The same al- gorithm with the \"heavier\" weight w .n D Œjx .n", "As deep neural networks (DNNs) are applied to increasingly challenging problems, they will need to be able to represent their own uncertainty. Modeling uncertainty is one of the key features of Bayesian methods. Using Bernoulli dropout with sampling at prediction time has recently been proposed as an efficient and well performing variational inference method for DNNs. However, sampling from other multiplicative noise based variational distributions has not been investigated in depth. We evaluated Bayesian DNNs trained with Bernoulli or Gaussian multiplicative masking of either the units (dropout) or the weights (dropconnect). We tested the calibration of the probabilistic predictions of Bayesian convolutional neural networks (CNNs) on MNIST and CIFAR-10. Sampling at prediction time increased the calibration of the DNNs' probabalistic predictions. Sampling weights, whether Gaussian or Bernoulli, led to more robust representation of uncertainty compared to sampling of units. However, using either Gaussian or Bernoulli dropout led to increased test set classification accuracy. Based on these findings we used both Bernoulli dropout and Gaussian dropconnect concurrently, which we show approximates the use of a spike-and-slab variational distribution without increasing the number of learned parameters. We found that spike-and-slab sampling had higher test set performance than Gaussian dropconnect and more robustly represented its uncertainty compared to Bernoulli dropout.", "We present a theoretically grounded approach to train deep neural networks, including recurrent networks, subject to class-dependent label noise. We propose two procedures for loss correction that are agnostic to both application domain and network architecture. They simply amount to at most a matrix inversion and multiplication, provided that we know the probability of each class being corrupted into another. 
We further show how one can estimate these probabilities, adapting a recent technique for noise estimation to the multi-class setting, and thus providing an end-to-end framework. Extensive experiments on MNIST, IMDB, CIFAR-10, CIFAR-100 and a large scale dataset of clothing images employing a diversity of architectures --- stacking dense, convolutional, pooling, dropout, batch normalization, word embedding, LSTM and residual layers --- demonstrate the noise robustness of our proposals. Incidentally, we also prove that, when ReLU is the only non-linearity, the loss curvature is immune to class-dependent label noise.", "We explore a recently proposed Variational Dropout technique that provided an elegant Bayesian interpretation to Gaussian Dropout. We extend Variational Dropout to the case when dropout rates are unbounded, propose a way to reduce the variance of the gradient estimator and report first experimental results with individual dropout rates per weight. Interestingly, it leads to extremely sparse solutions both in fully-connected and convolutional layers. This effect is similar to automatic relevance determination effect in empirical Bayes but has a number of advantages. We reduce the number of parameters up to 280 times on LeNet architectures and up to 68 times on VGG-like networks with a negligible decrease of accuracy." ] }
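The loss-correction procedures described in the abstracts above reduce to "a matrix inversion and multiplication" once the class-noise transition matrix is known. A minimal sketch of that idea, assuming a known row-stochastic matrix T with T[i, j] = P(observed label j | true label i); the function names and shapes are illustrative assumptions, not the authors' API:

```python
import numpy as np

def forward_corrected_loss(probs, noisy_label, T):
    """Forward correction: push the model's clean-class posterior through T
    and evaluate the usual log-loss against the observed (noisy) label."""
    noisy_probs = T.T @ probs  # p(noisy = j | x) = sum_i T[i, j] * p(i | x)
    return -np.log(noisy_probs[noisy_label])

def backward_corrected_loss(probs, noisy_label, T):
    """Backward correction: with per-class losses l_j = -log p_j, the vector
    T^{-1} l estimates the clean losses; index it with the observed label
    (this is where the matrix inversion comes in)."""
    per_class_loss = -np.log(probs)
    corrected = np.linalg.inv(T) @ per_class_loss
    return corrected[noisy_label]

# Toy usage: 3 classes, 10% symmetric label noise, softmax output `probs`.
T = np.full((3, 3), 0.05) + np.diag([0.85] * 3)   # rows sum to 1
probs = np.array([0.7, 0.2, 0.1])
print(forward_corrected_loss(probs, noisy_label=0, T=T))
```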
1901.07880
2913243196
The Chinese pronunciation system offers two characteristics that distinguish it from other languages: deep phonemic orthography and intonation variations. We are the first to argue that these two important properties can play a major role in Chinese sentiment analysis. Particularly, we propose two effective features to encode phonetic information. Next, we develop a Disambiguate Intonation for Sentiment Analysis (DISA) network using a reinforcement network. It functions as disambiguating intonations for each Chinese character (pinyin). Thus, a precise phonetic representation of Chinese is learned. Furthermore, we also fuse phonetic features with textual and visual features in order to mimic the way humans read and understand Chinese text. Experimental results on five different Chinese sentiment analysis datasets show that the inclusion of phonetic features significantly and consistently improves the performance of textual and visual representations and outshines the state-of-the-art Chinese character level representations.
One-hot representation was the earliest numeric word representation method in NLP. However, it usually suffers from high dimensionality and sparsity. To solve this problem, distributed representation (or word embedding) @cite_25 was proposed. Word embedding is a representation which maps words into low-dimensional vectors of real numbers by using neural networks. The key idea is based on the distributional hypothesis: the model learns to represent context words and the relation between context words and the target word.
{ "cite_N": [ "@cite_25" ], "mid": [ "2267742292" ], "abstract": [ "Distributed representations of words (aka word embedding) have proven helpful in solving natural language processing (NLP) tasks. Training distributed representations of words with neural networks has lately been a major focus of researchers in the field. Recent work on word embedding, the Continuous Bag-of-Words (CBOW) model and the Continuous Skip-gram (Skip-gram) model, have produced particularly impressive results, significantly speeding up the training process to enable word representation learning from largescale data. However, both CBOW and Skip-gram do not pay enough attention to word proximity in terms of model or word ambiguity in terms of linguistics. In this paper, we propose Proximity-Ambiguity Sensitive (PAS) models (i.e. PAS CBOW and PAS Skip-gram) to produce high quality distributed representations of words considering both word proximity and ambiguity. From the model perspective, we introduce proximity weights as parameters to be learned in PAS CBOWand used in PAS Skip-gram. By better modeling word proximity, we reveal the strength of pooling-structured neural networks in word representation learning. The proximitysensitive pooling layer can also be applied to other neural network applications that employ pooling layers. From the linguistics perspective, we train multiple representation vectors per word. Each representation vector corresponds to a particular group of POS tags of the word. By using PAS models, we achieved a 16.9 increase in accuracy over state-of-theart models." ] }
1901.07880
2913243196
The Chinese pronunciation system offers two characteristics that distinguish it from other languages: deep phonemic orthography and intonation variations. We are the first to argue that these two important properties can play a major role in Chinese sentiment analysis. Particularly, we propose two effective features to encode phonetic information. Next, we develop a Disambiguate Intonation for Sentiment Analysis (DISA) network using a reinforcement network. It functions as disambiguating intonations for each Chinese character (pinyin). Thus, a precise phonetic representation of Chinese is learned. Furthermore, we also fuse phonetic features with textual and visual features in order to mimic the way humans read and understand Chinese text. Experimental results on five different Chinese sentiment analysis datasets show that the inclusion of phonetic features significantly and consistently improves the performance of textual and visual representations and outshines the state-of-the-art Chinese character level representations.
Chinese text differs from English text in two key aspects: it has no explicit word boundaries, and it is compositional due to its pictographic nature. Because of the former, word segmentation tools such as ICTCLAS @cite_45 , THULAC @cite_19 , and Jieba (github.com/fxsjy/jieba) are routinely applied before text representation. Because of the latter, several works have focused on the use of sub-word components (such as characters and radicals) to improve word embeddings. @cite_57 proposed decomposing Chinese words into characters and presented a character-enhanced word embedding model (CWE). @cite_34 @cite_31 decomposed Chinese characters into radicals and developed a radical-enhanced Chinese character embedding. In @cite_53 , pure radical-based embeddings were trained for short-text categorization, Chinese word segmentation and web search ranking. @cite_37 extended the pure radical embedding by introducing multi-granularity Chinese word embeddings.
{ "cite_N": [ "@cite_37", "@cite_53", "@cite_57", "@cite_19", "@cite_45", "@cite_31", "@cite_34" ], "mid": [ "2251131401", "1412698887", "1594229598", "1971678616", "1575907248", "2792781829", "2160513510" ], "abstract": [ "Languages using Chinese characters are mostly processed at word level. Inspired by recent success of deep learning, we delve deeper to character and radical levels for Chinese language processing. We propose a new deep learning technique, called “radical embedding”, with justifications based on Chinese linguistics, and validate its feasibility and utility through a set of three experiments: two in-house standard experiments on short-text categorization (STC) and Chinese word segmentation (CWS), and one in-field experiment on search ranking. We show that radical embedding achieves comparable, and sometimes even better, results than competing methods.", "A Chinese sentence is represented as a sequence of characters, and words are not separated from each other. In statistical machine translation, the conventional approach is to segment the Chinese character sequence into words during the pre-processing. The training and translation are performed afterwards. However, this method is not optimal for two reasons: 1. The segmentations may be erroneous. 2. For a given character sequence, the best segmentation depends on its context and translation. In order to minimize the translation errors, we take different segmentation alternatives instead of a single segmentation into account and integrate the segmentation process with the search for the best translation. The segmentation decision is only taken during the generation of the translation. With this method we are able to translate Chinese text at the character level. The experiments on the IWSLT 2005 task showed improvements in the translation performance using two translation systems: a phrase-based system and a finite state transducer based system. For the phrase-based system, the improvement of the BLEU score is 1.5 absolute.", "In this paper, we present a method to leverage radical for learning Chinese character embedding. Radical is a semantic and phonetic component of Chinese character. It plays an important role for modelling character semantics as characters with the same radical usually have similar semantic meaning and grammatical usage. However, most existing character (or word) embedding learning algorithms typically only model the syntactic contexts but ignore the radical information. As a result, they do not explicitly capture the inner semantic connections of characters via radical into the embedding space of characters. To solve this problem, we propose to incorporate the radical information for enhancing the Chinese character embedding. We present a dedicated neural architecture with a hybrid loss function, and integrate the radical information through softmax upon each character. To verify the effectiveness of the learned character embedding, we apply it on Chinese word segmentation. Experiment results on two benchmark datasets show that, our radical-enhanced method outperforms two widely-used context-based embedding learning algorithms.", "Chinese word segmentation is an active area in Chinese language processing though it is suffering from the argument about what precisely is a word in Chinese. Based on corpus-based segmentation standard, we launched this study. In detail, we regard Chinese word segmentation as a character-based tagging problem. 
We show that there has been a potent trend of using a character-based tagging approach in this field. In particular, learning from segmented corpus with or without additional linguistic resources is treated in a unified way in which the only difference depends on how the feature template set is selected. It differs from existing work in that both feature template selection and tag set selection are considered in our approach, instead of the previous feature template focus only technique. We show that there is a significant performance difference as different tag sets are selected. This is especially applied to a six-tag set, which is good enough for most current segmented corpora. The linguistic meaning of a tag set is also discussed. Our results show that a simple learning system with six n-gram feature templates and a six-tag set can obtain competitive performance in the cases of learning only from a training corpus. In cases when additional linguistic resources are available, an ensemble learning technique, assistant segmenter, is proposed and its effectiveness is verified. Assistant segmenter is also proven to be an effective method as segmentation standard adaptation that outperforms existing ones. Based on the proposed approach, our system provides state-of-the-art performance in all 12 corpora of three international Chinese word segmentation bakeoffs.", "Chinese part-of-speech (POS) tagging assigns one POS tag to each word in a Chinese sentence. However, since words are not demarcated in a Chinese sentence, Chinese POS tagging requires word segmentation as a prerequisite. We could perform Chinese POS tagging strictly after word segmentation (one-at-a-time approach), or perform both word segmentation and POS tagging in a combined, single step simultaneously (all-at-once approach). Also, we could choose to assign POS tags on a word-by-word basis, making use of word features in the surrounding context (word-based), or on a character-by-character basis with character features (character-based). This paper presents an in-depth study on such issues of processing architecture and feature representation for Chinese POS tagging, within a maximum entropy framework. We found that while the all-at-once, character-based approach is the best, the one-at-a-time, character-based approach is a worthwhile compromise, performing only slightly worse in terms of accuracy, but taking shorter time to train and run. As part of our investigation, we also built a state-of-the-art Chinese word segmenter, which outperforms the best SIGHAN 2003 word segmenters in the closed track on 3 out of 4 test corpora.", "We introduce Chinese Text in the Wild, a very large dataset of Chinese text in street view images. While optical character recognition (OCR) in document images is well studied and many commercial tools are available, detection and recognition of text in natural images is still a challenging problem, especially for more complicated character sets such as Chinese text. Lack of training data has always been a problem, especially for deep learning methods which require massive training data. In this paper we provide details of a newly created dataset of Chinese text with about 1 million Chinese characters annotated by experts in over 30 thousand street view images. This is a challenging dataset with good diversity. It contains planar text, raised text, text in cities, text in rural areas, text under poor illumination, distant text, partially occluded text, etc.
For each character in the dataset, the annotation includes its underlying character, its bounding box, and 6 attributes. The attributes indicate whether it has complex background, whether it is raised, whether it is handwritten or printed, etc. The large size and diversity of this dataset make it suitable for training robust neural networks for various tasks, particularly detection and recognition. We give baseline results using several state-of-the-art networks, including AlexNet, OverFeat, Google Inception and ResNet for character recognition, and YOLOv2 for character detection in images. Overall Google Inception has the best performance on recognition with 80.5% top-1 accuracy, while YOLOv2 achieves an mAP of 71.0% on detection. Dataset, source code and trained models will all be publicly available on the website.", "We address the issue of consuming heterogeneous annotation data for Chinese word segmentation and part-of-speech tagging. We empirically analyze the diversity between two representative corpora, i.e. Penn Chinese Treebank (CTB) and PKU's People's Daily (PPD), on manually mapped data, and show that their linguistic annotations are systematically different and highly compatible. The analysis is further exploited to improve processing accuracy by (1) integrating systems that are respectively trained on heterogeneous annotations to reduce the approximation error, and (2) re-training models with high quality automatically converted data to reduce the estimation error. Evaluation on the CTB and PPD data shows that our novel model achieves a relative error reduction of 11% over the best reported result in the literature." ] }
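As a toy illustration of the character- and radical-level decomposition used by the sub-word models cited above, with a hand-made radical lookup table (real systems derive this mapping from dictionary resources, not a hardcoded table):

```python
# Toy decomposition of a Chinese word into characters and radicals.
# The radical table is a hand-made illustration, not a real resource.
radical_of = {"河": "氵", "湖": "氵", "妈": "女", "好": "女"}

def decompose(word):
    chars = list(word)                                   # character level
    radicals = [radical_of.get(c, "?") for c in chars]   # radical level
    return chars, radicals

# Semantically related characters often share a radical, which is what
# radical-enhanced embeddings exploit:
print(decompose("河湖"))   # (['河', '湖'], ['氵', '氵']) -- shared water radical
```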
1901.07880
2913243196
The Chinese pronunciation system offers two characteristics that distinguish it from other languages: deep phonemic orthography and intonation variations. We are the first to argue that these two important properties can play a major role in Chinese sentiment analysis. Particularly, we propose two effective features to encode phonetic information. Next, we develop a Disambiguate Intonation for Sentiment Analysis (DISA) network using a reinforcement network. It functions as disambiguating intonations for each Chinese character (pinyin). Thus, a precise phonetic representation of Chinese is learned. Furthermore, we also fuse phonetic features with textual and visual features in order to mimic the way humans read and understand Chinese text. Experimental results on five different Chinese sentiment analysis datasets show that the inclusion of phonetic features significantly and consistently improves the performance of textual and visual representations and outshines the state-of-the-art Chinese character level representations.
Sentiment analysis has attracted growing interest both within the scientific community, leading to many exciting open challenges, and in the business world, due to the remarkable benefits to be had from financial @cite_28 and political @cite_15 forecasting, user profiling @cite_26 and community detection @cite_4 , manufacturing and supply chain applications @cite_43 , human communication comprehension @cite_55 and dialogue systems @cite_6 , etc. Various directions have been actively explored in the past few years, from document level @cite_7 @cite_46 , to sentence level @cite_17 @cite_30 and to aspect level @cite_11 @cite_32 . Most methods take a language-agnostic perspective and develop models intended to work for a broad spectrum of languages. Only a limited number of works have studied language-specific characteristics @cite_31 @cite_23 @cite_8 . Among them, there is almost no literature trying to take advantage of phonetic information for Chinese representation. We, however, believe that Chinese phonetic information could be of great value to the representation and sentiment analysis of the Chinese language, based on, but not limited to, the following evidence.
{ "cite_N": [ "@cite_30", "@cite_31", "@cite_26", "@cite_4", "@cite_11", "@cite_7", "@cite_8", "@cite_28", "@cite_46", "@cite_55", "@cite_32", "@cite_6", "@cite_43", "@cite_23", "@cite_15", "@cite_17" ], "mid": [ "2034090215", "2790250716", "2017489100", "2167660864", "2160250477", "774598698", "198736415", "2251939518", "2215376118", "2160513510", "2251900677", "1489003673", "2492922441", "1961993270", "2131305515", "2252127909" ], "abstract": [ "Sentiment analysis, also called opinion mining, is a form of information extraction from text of growing research and commercial interest. In this paper we present our machine learning experiments with regard to sentiment analysis in blog, review and forum texts found on the World Wide Web and written in English, Dutch and French. We train from a set of example sentences or statements that are manually annotated as positive, negative or neutral with regard to a certain entity. We are interested in the feelings that people express with regard to certain consumption products. We learn and evaluate several classification models that can be configured in a cascaded pipeline. We have to deal with several problems, being the noisy character of the input texts, the attribution of the sentiment to a particular entity and the small size of the training set. We succeed to identify positive, negative and neutral feelings to the entity under consideration with ca. 83 accuracy for English texts based on unigram features augmented with linguistic features. The accuracy results of processing the Dutch and French texts are ca. 70 and 68 respectively due to the larger variety of the linguistic expressions that more often diverge from standard language, thus demanding more training patterns. In addition, our experiments give us insights into the portability of the learned models across domains and languages. A substantial part of the article investigates the role of active learning techniques for reducing the number of examples to be manually annotated.", "Abstract Aspect-based sentiment analysis aims at identifying sentiment polarity towards aspect targets in a sentence. Previously, the task was modeled as a sentence-level sentiment classification problem that treated aspect targets as a hint. Such approaches oversimplify the problem by averaging word embeddings when the aspect target is a multi-word sequence. In this paper, we formalize the problem from a different perspective, i.e., that sentiment at aspect target level should be the main focus. Due to the fact that written Chinese is very rich and complex, Chinese aspect targets can be studied at three different levels of granularity: radical, character and word. Thus, we propose to explicitly model the aspect target and conduct sentiment classification directly at the aspect target level via three granularities. Moreover, we study two fusion methods for such granularities in the task of Chinese aspect-level sentiment analysis. Experimental results on a multi-word aspect target subset from SemEval2014 and four Chinese review datasets validate our claims and show the improved performance of our model over the state of the art.", "Sentiment analysis is the natural language processing task dealing with sentiment detection and classification from texts. In recent years, due to the growth in the quantity and fast spreading of user-generated contents online and the impact such information has on events, people and companies worldwide, this task has been approached in an important body of research in the field. 
Despite different methods having been proposed for distinct types of text, the research community has concentrated less on developing methods for languages other than English. In the above-mentioned context, the present work studies the possibility to employ machine translation systems and supervised methods to build models able to detect and classify sentiment in languages for which less no resources are available for this task when compared to English, stressing upon the impact of translation quality on the sentiment classification performance. Our extensive evaluation scenarios show that machine translation systems are approaching a good level of maturity and that they can, in combination to appropriate machine learning algorithms and carefully chosen features, be used to build sentiment analysis systems that can obtain comparable performances to the one obtained for English.", "The lack of Chinese sentiment corpora limits the research progress on Chinese sentiment classification. However, there are many freely available English sentiment corpora on the Web. This paper focuses on the problem of cross-lingual sentiment classification, which leverages an available English corpus for Chinese sentiment classification by using the English corpus as training data. Machine translation services are used for eliminating the language gap between the training set and test set, and English features and Chinese features are considered as two independent views of the classification problem. We propose a cotraining approach to making use of unlabeled Chinese data. Experimental results show the effectiveness of the proposed approach, which can outperform the standard inductive classifiers and the transductive classifiers.", "Sentiment analysis often relies on a semantic orientation lexicon of positive and negative words. A number of approaches have been proposed for creating such lexicons, but they tend to be computationally expensive, and usually rely on significant manual annotation and large corpora. Most of these methods use WordNet. In contrast, we propose a simple approach to generate a high-coverage semantic orientation lexicon, which includes both individual words and multi-word expressions, using only a Roget-like thesaurus and a handful of affixes. Further, the lexicon has properties that support the Polyanna Hypothesis. Using the General Inquirer as gold standard, we show that our lexicon has 14 percentage points more correct entries than the leading WordNet-based high-coverage lexicon (SentiWordNet). In an extrinsic evaluation, we obtain significantly higher performance in determining phrase polarity using our thesaurus-based lexicon than with any other. Additionally, we explore the use of visualization techniques to gain insight into the our algorithm beyond the evaluations mentioned above.", "Scarcity of annotated corpora for many languages is a bottleneck for training finegrained sentiment analysis models that can tag aspects and subjective phrases. We propose to exploit statistical machine translation to alleviate the need for training data by projecting annotated data in a source language to a target language such that a supervised fine-grained sentiment analysis system can be trained. To avoid a negative influence of poor-quality translations, we propose a filtering approach based on machine translation quality estimation measures to select only high-quality sentence pairs for projection. 
We evaluate on the language pair German English on a corpus of product reviews annotated for both languages and compare to in-target-language training. Projection without any filtering leads to 23 F1 in the task of detecting aspect phrases, compared to 41 F1 for in-target-language training. Our approach obtains up to 47 F1. Further, we show that the detection of subjective phrases is competitive to in-target-language training without filtering.", "There is a growing interest in mining opinions using sentiment analysis methods from sources such as news, blogs and product reviews. Most of these methods have been developed for English and are difficult to generalize to other languages. We explore an approach utilizing state-of-the-art machine translation technology and perform sentiment analysis on the English translation of a foreign language text. Our experiments indicate that (a) entity sentiment scores obtained by our method are statistically significantly correlated across nine languages of news sources and five languages of a parallel corpus; (b) the quality of our sentiment analysis method is largely translator independent; (c) after applying certain normalization techniques, our entity sentiment scores can be used to perform meaningful cross-cultural comparisons.", "Semantic word spaces have been very useful but cannot express the meaning of longer phrases in a principled way. Further progress towards understanding compositionality in tasks such as sentiment detection requires richer supervised training and evaluation resources and more powerful models of composition. To remedy this, we introduce a Sentiment Treebank. It includes fine grained sentiment labels for 215,154 phrases in the parse trees of 11,855 sentences and presents new challenges for sentiment compositionality. To address them, we introduce the Recursive Neural Tensor Network. When trained on the new treebank, this model outperforms all previous methods on several metrics. It pushes the state of the art in single sentence positive negative classification from 80 up to 85.4 . The accuracy of predicting fine-grained sentiment labels for all phrases reaches 80.7 , an improvement of 9.7 over bag of features baselines. Lastly, it is the only model that can accurately capture the effects of negation and its scope at various tree levels for both positive and negative phrases.", "With the advent of Web 2.0, people became more eager to express and share their opinions on web regarding day-to-day activities and global issues as well. Evolution of social media has also contributed immensely to these activities, thereby providing us a transparent platform to share views across the world. These electronic Word of Mouth (eWOM) statements expressed on the web are much prevalent in business and service industry to enable customer to share his her point of view. In the last one and half decades, research communities, academia, public and service industries are working rigorously on sentiment analysis, also known as, opinion mining, to extract and analyze public mood and views. In this regard, this paper presents a rigorous survey on sentiment analysis, which portrays views presented by over one hundred articles published in the last decade regarding necessary tasks, approaches, and applications of sentiment analysis. Several sub-tasks need to be performed for sentiment analysis which in turn can be accomplished using various approaches and techniques. 
This survey covering published literature during 2002-2015, is organized on the basis of sub-tasks to be performed, machine learning and natural language processing techniques used and applications of sentiment analysis. The paper also presents open issues and along with a summary table of a hundred and sixty-one articles.", "We address the issue of consuming heterogeneous annotation data for Chinese word segmentation and part-of-speech tagging. We empirically analyze the diversity between two representative corpora, i.e. Penn Chinese Treebank (CTB) and PKU's People's Daily (PPD), on manually mapped data, and show that their linguistic annotations are systematically different and highly compatible. The analysis is further exploited to improve processing accuracy by (1) integrating systems that are respectively trained on heterogeneous annotations to reduce the approximation error, and (2) re-training models with high quality automatically converted data to reduce the estimation error. Evaluation on the CTB and PPD data shows that our novel model achieves a relative error reduction of 11 over the best reported result in the literature.", "We propose a novel approach to sentiment analysis for a low resource setting. The intuition behind this work is that sentiment expressed towards an entity, targeted sentiment, may be viewed as a span of sentiment expressed across the entity. This representation allows us to model sentiment detection as a sequence tagging problem, jointly discovering people and organizations along with whether there is sentiment directed towards them. We compare performance in both Spanish and English on microblog data, using only a sentiment lexicon as an external resource. By leveraging linguisticallyinformed features within conditional random fields (CRFs) trained to minimize empirical risk, our best models in Spanish significantly outperform a strong baseline, and reach around 90 accuracy on the combined task of named entity recognition and sentiment prediction. Our models in English, trained on a much smaller dataset, are not yet statistically significant against their baselines.", "The internet has made it feasible to tap a continuous stream of public sentiment from the world wide web, quite literally permitting one to \"feel the pulse\" of any issue under consideration. We present a methodology for real time sentiment extraction in the domain of finance. With the advent of the web, there has been a sharp increase in the influence of individuals on the stock market via web-based trading and the posting of sentiment to stock message boards. While it is importantto capture this \"sentiment\" of small investors, as yet, no index of sentiment has been compiled. This paper comprises (a) a technology for extracting small investor sentiment from web sources to create an index, and (b) illustrative applications of the methodology. We make use of computerized natural language and statistical algorithms for the automated classification of messages posted on the web. We design a suite of classification algorithms, each of different theoretical content, with a view to characterizing the sentiment of any single posting to a message board. The use of multiple methods allows imposition of voting rules in the classification process. It also enables elimination of \"fuzzy\" messages which are better off uninterpreted. A majority rule across algorithms vastly improves classification accuracy, but also leads to a natural increase in the number of messages classified as \"fuzzy\". 
The classifier achieves an accuracy of 62 (versus a random classification accuracy of 33 ), and compares favorably against human agreement on message classification, which was 72 . The technology is computationally efficient, allowing the access and interpretations of thousands of messages within minutes. Our illustrative applications show evidence of a strong link between market movements and sentiment. Based on approximately 25,000 messages for the last quarter of 2000, we found evidence that sentiment is based on stock movements.", "With the advent of Internet, people actively express their opinions about products, services, events, political parties, etc., in social media, blogs, and website comments. The amount of research work on sentiment analysis is growing explosively. However, the majority of research efforts are devoted to English-language data, while a great share of information is available in other languages. We present a state-of-the-art review on multilingual sentiment analysis. More importantly, we compare our own implementation of existing approaches on common data. Precision observed in our experiments is typically lower than the one reported by the original authors, which we attribute to the lack of detail in the original presentation of those approaches. Thus, we compare the existing works by what they really offer to the reader, including whether they allow for accurate implementation and for reliable reproduction of the reported results.", "Recognizing negative and speculative information is highly relevant for sentiment analysis. This paper presents a machine-learning approach to automatically detect this kind of information in the review domain. The resulting system works in two steps: in the first pass, negation speculation cues are identified, and in the second phase the full scope of these cues is determined. The system is trained and evaluated on the Simon Fraser University Review corpus, which is extensively used in opinion mining. The results show how the proposed method outstrips the baseline by as much as roughly 20 in the negation cue detection and around 13 in the scope recognition, both in terms of F1. In speculation, the performance obtained in the cue prediction phase is close to that obtained by a human rater carrying out the same task. In the scope detection, the results are also promising and represent a substantial improvement on the baseline up by roughly 10 . A detailed error analysis is also provided. The extrinsic evaluation shows that the correct identification of cues and scopes is vital for the task of sentiment analysis.", "The explosion of Web opinion data has made essential the need for automatic tools to analyze and understand people's sentiments toward different topics. In most sentiment analysis applications, the sentiment lexicon plays a central role. However, it is well known that there is no universally optimal sentiment lexicon since the polarity of words is sensitive to the topic domain. Even worse, in the same domain the same word may indicate different polarities with respect to different aspects. For example, in a laptop review, \"large\" is negative for the battery aspect while being positive for the screen aspect. In this paper, we focus on the problem of learning a sentiment lexicon that is not only domain specific but also dependent on the aspect in context given an unlabeled opinionated text collection. 
We propose a novel optimization framework that provides a unified and principled way to combine different sources of information for learning such a context-dependent sentiment lexicon. Experiments on two data sets (hotel reviews and customer feedback surveys on printers) show that our approach can not only identify new sentiment words specific to the given domain but also determine the different polarities of a word depending on the aspect in context. In further quantitative evaluation, our method is proved to be effective in constructing a high quality lexicon by comparing with a human annotated gold standard. In addition, using the learned context-dependent sentiment lexicon improved the accuracy in an aspect-level sentiment classification task.", "Cross-Lingual Sentiment Analysis (CLSA) is the task of predicting the polarity of the opinion expressed in a text in a language Ltest using a classifier trained on the corpus of another language Lt rain. Popular approaches use Machine Translation (MT) to convert the test document in Ltest to Lt rain and use the classifier of Lt rain. However, MT systems do not exist for most pairs of languages and even if they do, their translation accuracy is low. So we present an alternative approach to CLSA using WordNet senses as features for supervised sentiment classification. A document in Ltest is tested for polarity through a classifier trained on sense marked and polarity labeled corpora of Lt rain. The crux of the idea is to use the linked WordNets of two languages to bridge the language gap. We report our results on two widely spoken Indian languages, Hindi (450 million speakers) and Marathi (72 million speakers), which do not have an MT system between them. The sense-based approach gives a CLSA accuracy of 72 and 84 for Hindi and Marathi sentiment classification respectively. This is an improvement of 14 -15 over an approach that uses a bilingual dictionary." ] }
1901.07880
2913243196
The Chinese pronunciation system offers two characteristics that distinguish it from other languages: deep phonemic orthography and intonation variations. We are the first to argue that these two important properties can play a major role in Chinese sentiment analysis. Particularly, we propose two effective features to encode phonetic information. Next, we develop a Disambiguate Intonation for Sentiment Analysis (DISA) network using a reinforcement network. It functions as disambiguating intonations for each Chinese character (pinyin). Thus, a precise phonetic representation of Chinese is learned. Furthermore, we also fuse phonetic features with textual and visual features in order to mimic the way humans read and understand Chinese text. Experimental results on five different Chinese sentiment analysis datasets show that the inclusion of phonetic features significantly and consistently improves the performance of textual and visual representations and outshines the state-of-the-art Chinese character level representations.
Shu and Anderson conducted a study on Chinese phonetic awareness in @cite_0 . The study involved 113 Chinese 2nd, 4th, and 6th graders enrolled in a working-class elementary school in Beijing, China. Their task was to represent the pronunciation of 60 semantic phonetic compound characters. Results showed that children as young as 2nd graders are better able to represent the pronunciation of regular characters than irregular characters or characters with bound phonetics.
{ "cite_N": [ "@cite_0" ], "mid": [ "2001970020" ], "abstract": [ "This study investigated the development of phonetic awareness, meaning insight into the structure and function of the component of Chinese characters that gives a clue to pronunciation. Participants were 113 Chinese 2nd, 4th, and 6th graders enrolled in a working-class Beijing, China elementary school. The children's task was to represent the pronunciation of 60 semantic phonetic compound characters. As anticipated, both character familiarity and character regularity strongly influenced performance. Children as young as 2nd graders are better able to represent the pronunciation of regular characters than irregular characters or characters with bound phonetics. Phonetic awareness continues to develop over the elementary school years, as is shown by the increasing influence of phonetic regularity on the performance of children in higher grades and the increasing percentage of phonetic-related errors among older children." ] }
1901.07914
2913315775
Modern lightweight dual-arm robots bring the physical capabilities to quickly take over tasks at typical industrial workplaces designed for workers. In times of mass-customization, low setup times including the instructing/specifying of new tasks are crucial to stay competitive. We propose a constraint programming approach to simultaneous task allocation and motion scheduling for such industrial manipulation and assembly tasks. The proposed approach covers dual-arm and even multi-arm robots as well as connected machines. The key concept are Ordered Visiting Constraints, a descriptive and extensible model to specify such tasks with their spatiotemporal requirements and task-specific combinatorial or ordering constraints. Our solver integrates such task models and robot motion models into constraint optimization problems and solves them efficiently using various heuristics to produce makespan-optimized robot programs. The proposed task model is robot independent and thus can easily be deployed to other robotic platforms. Flexibility and portability of our proposed model is validated through several experiments on different simulated robot platforms. We benchmarked our search strategy against a general-purpose heuristic. For large manipulation tasks with 200 objects, our solver implemented using Google's Operations Research tools and ROS requires less than a minute to compute usable plans.
The state-of-the-art optimal TAPF method @cite_15 cannot solve STAAMS problems in general, as it does not compute kinematically feasible motions for agents, nor can it be applied in cases requiring ordering decisions about task assignments. Online methods for multi-agent task assignment and scheduling have been developed for small teams of agents and are highly flexible against execution uncertainty @cite_23 . Multi-robot task allocation with temporal ordering constraints has been studied in the context of integrating auction-based methods with Simple Temporal Problems @cite_4 . These methods, however, do not account for conflicting spatial interactions, as needed, for example, in dual-arm manipulations. Applications of CP to multi-robot task planning and scheduling often use a simplified robot motion model and ignore the cost of spatial interaction among robots in the scheduling process @cite_19 .
{ "cite_N": [ "@cite_19", "@cite_15", "@cite_4", "@cite_23" ], "mid": [ "1988994102", "42384017", "2784322399", "2482025661" ], "abstract": [ "This paper presents a strategy for planning robot motions in dynamic, uncertain environments (DUEs). Successful and efficient robot operation in such environments requires reasoning about the future evolution and uncertainties of the states of the moving agents and obstacles. A novel procedure to account for future information gathering (and the quality of that information) in the planning process is presented. To approximately solve the stochastic dynamic programming problem that is associated with DUE planning, we present a partially closed-loop receding horizon control algorithm whose solution integrates prediction, estimation, and planning while also accounting for chance constraints that arise from the uncertain locations of the robot and obstacles. Simulation results in simple static and dynamic scenarios illustrate the benefit of the algorithm over classical approaches. The approach is also applied to more complicated scenarios, including agents with complex, multimodal behaviors, basic robot-agent interaction, and agent information gathering.", "Existing approaches to multirobot coordination separate scheduling and task allocation, but finding the optimal schedule with joint tasks and spatial constraints requires robots to simultaneously solve the scheduling, task allocation, and path planning problems. We present a formal description of the multirobot joint task allocation problem with heterogeneous capabilities and spatial constraints and an instantiation of the problem for the search and rescue domain. We introduce a novel declarative framework for modeling the problem as a mixed integer linear programming (MILP) problem and present a centralized anytime algorithm with error bounds. We demonstrate that our algorithm can outperform standard MILP solving techniques, greedy heuristics, and a market based approach which separates scheduling and task allocation.", "We present an efficient algorithm for multi-robot motion planning from linear temporal logic (LTL) specifications. We assume that the dynamics of each robot can be described by a discrete-time, linear system together with constraints on the control inputs and state variables. Given an LTL formula specifying the multi-robot mission, our goal is to construct a set of collision-free trajectories for all robots, and the associated control strategies, to satisfy We show that the motion planning problem can be formulated as the feasibility problem for a formula p over Boolean and convex constraints, respectively capturing the LTL specification and the robot dynamics. We then adopt a satisfiability modulo convex (SMC) programming approach that exploits a monotonicity property of p to decompose the problem into smaller subproblems. Simulation results show that our algorithm is more than one order of magnitude faster than state-of-the-art sampling-based techniques for high-dimensional state spaces while supporting complex missions.", "We study the TAPF (combined target-assignment and path-finding) problem for teams of agents in known terrain, which generalizes both the anonymous and non-anonymous multi-agent path-finding problems. Each of the teams is given the same number of targets as there are agents in the team. Each agent has to move to exactly one target given to its team such that all targets are visited. 
The TAPF problem is to first assign agents to targets and then plan collision-free paths for the agents to their targets in a way such that the makespan is minimized. We present the CBM (Conflict-Based Min-Cost-Flow) algorithm, a hierarchical algorithm that solves TAPF instances optimally by combining ideas from anonymous and non-anonymous multi-agent path-finding algorithms. On the low level, CBM uses a min-cost max-flow algorithm on a time-expanded network to assign all agents in a single team to targets and plan their paths. On the high level, CBM uses conflict-based search to resolve collisions among agents in different teams. Theoretically, we prove that CBM is correct, complete and optimal. Experimentally, we show the scalability of CBM to TAPF instances with dozens of teams and hundreds of agents and adapt it to a simulated warehouse system." ] }
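Since the paper's abstract states that the solver is implemented with Google's OR-Tools, here is a minimal CP-SAT sketch in that spirit: tasks allocated to one of two arms, a no-overlap constraint per arm, one precedence, and makespan minimization. The durations and the precedence pair are made-up toy data, and robot motion costs between tasks are deliberately ignored, so this is an illustration of the modeling style rather than the authors' solver:

```python
from ortools.sat.python import cp_model

durations = [3, 2, 4, 2]            # toy task durations
horizon = sum(durations)

model = cp_model.CpModel()
starts, ends = [], []
arm_intervals = {0: [], 1: []}

for t, d in enumerate(durations):
    s = model.NewIntVar(0, horizon, f"start_{t}")
    e = model.NewIntVar(0, horizon, f"end_{t}")
    starts.append(s)
    ends.append(e)
    literals = []
    for arm in (0, 1):
        lit = model.NewBoolVar(f"task{t}_on_arm{arm}")
        # The interval occupies the arm only if the task is allocated to it.
        arm_intervals[arm].append(
            model.NewOptionalIntervalVar(s, d, e, lit, f"iv_{t}_{arm}"))
        literals.append(lit)
    model.Add(sum(literals) == 1)   # each task runs on exactly one arm

for arm in (0, 1):
    model.AddNoOverlap(arm_intervals[arm])  # an arm does one task at a time

model.Add(ends[0] <= starts[1])     # a toy Ordered-Visiting-style precedence

makespan = model.NewIntVar(0, horizon, "makespan")
model.AddMaxEquality(makespan, ends)
model.Minimize(makespan)

solver = cp_model.CpSolver()
if solver.Solve(model) == cp_model.OPTIMAL:
    print("makespan =", solver.Value(makespan))
```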
1901.07914
2913315775
Modern lightweight dual-arm robots bring the physical capabilities to quickly take over tasks at typical industrial workplaces designed for workers. In times of mass-customization, low setup times including the instructing/specifying of new tasks are crucial to stay competitive. We propose a constraint programming approach to simultaneous task allocation and motion scheduling for such industrial manipulation and assembly tasks. The proposed approach covers dual-arm and even multi-arm robots as well as connected machines. The key concept are Ordered Visiting Constraints, a descriptive and extensible model to specify such tasks with their spatiotemporal requirements and task-specific combinatorial or ordering constraints. Our solver integrates such task models and robot motion models into constraint optimization problems and solves them efficiently using various heuristics to produce makespan-optimized robot programs. The proposed task model is robot independent and thus can easily be deployed to other robotic platforms. Flexibility and portability of our proposed model is validated through several experiments on different simulated robot platforms. We benchmarked our search strategy against a general-purpose heuristic. For large manipulation tasks with 200 objects, our solver implemented using Google's Operations Research tools and ROS requires less than a minute to compute usable plans.
@cite_2 present a survey of the task sequencing problem for industrial robots, in which sources of execution variants are systematically identified for a given task specification (e.g., multiple inverse kinematic solutions, partial ordering) and optimized based on various cost functions. The survey, however, does not cover tasks applicable to multi-arm robots. @cite_22 schedule paint strokes while ignoring the dependency between the ordering of the strokes and their motions. This approach does not generalize to multi-robot scenarios, as this dependency cannot be ignored due to robot-robot collisions. Task orders can be represented via hierarchical task networks (HTN) @cite_12 . However, an HTN by itself is not capable of generating orderings in the plan that are optimized, or even feasible, from the point of view of the robot's geometry or motions.
{ "cite_N": [ "@cite_22", "@cite_12", "@cite_2" ], "mid": [ "2395113136", "2418222507", "2086550339" ], "abstract": [ "We consider the multi-robot task allocation (MRTA) problem in an initially unknown environment. The objective of the MRTA problem is to find a schedule or sequence of tasks that should be performed by a set of robots so that the cost or energy expended by the robots is minimized. Existing solutions for the MRTA problem mainly concentrate on finding an efficient task allocation among robots, without directly incorporating changes to tasks’ costs originating from changes in robots’ paths due to dynamically detected obstacles while moving between tasks. Dynamically updating path costs is an important aspect as changing path costs can alter the task sequence for robots that corresponds to the minimum cost. In this paper, we attempt to address this problem by developing an algorithm called MRTA-RTPP (MRTA with Real-time Path Planning) by integrating a greedy MRTA algorithm for task planning with a Field D*-based path planning algorithm. Our technique is capable of handling dynamic changes in a robot’s path costs due to static as well as mobile obstacles and computes a new task schedule if the original schedule is no longer optimal due to the robots’ replanned paths. We have verified our proposed technique on physical Corobot robots that perform surveillancelike tasks by visiting a set of locations. Our experimental results show that that our MRTA technique is able to handle dynamic path changes while reducing the cost of the schedule to the robots.", "To enable safe and efficient human–robot collaboration in shared workspaces, it is important for the robot to predict how a human will move when performing a task. While predicting human motion for tasks not known a priori is very challenging, we argue that single-arm reaching motions for known tasks in collaborative settings (which are especially relevant for manufacturing) are indeed predictable. Two hypotheses underlie our approach for predicting such motions: First, that the trajectory the human performs is optimal with respect to an unknown cost function, and second, that human adaptation to their partner's motion can be captured well through iterative replanning with the above cost function. The key to our approach is thus to learn a cost function that “explains” the motion of the human. To do this, we gather example trajectories from pairs of participants performing a collaborative assembly task using motion capture. We then use inverse optimal control to learn a cost function from these trajectories. Finally, we predict reaching motions from the human's current configuration to a task-space goal region by iteratively replanning a trajectory using the learned cost function. Our planning algorithm is based on the trajectory optimizer: stochastic trajectory optimizer for motion planning [1] ; it plans for a 23-degree-of-freedom human kinematic model and accounts for the presence of a moving collaborator and obstacles in the environment. Our results suggest that in most cases, our method outperforms baseline methods when predicting motions. We also show that our method outperforms baselines for predicting human motion when a human and a robot share the workspace.", "In this paper, we investigate the problem of scheduling a 6 DOF robotic arm to carry out a sequence of spray painting tasks. 
The duration of any given painting task is process dependent and fixed, but the duration of an ''intertask'', corresponding to the process of relocating and reorienting the robot arm from one painting task to the next one, is influenced by the order of tasks and must be minimized by the scheduler. There are multiple solutions for reaching any given painting task and tasks can be performed in either of two different directions. Further complicating the problem are characteristics of the painting process application itself. Unlike spot-welding, painting tasks require movement of the entire robot arm. In addition to minimizing intertask duration, the scheduler must strive to maximize painting quality and the problem is formulated as a multi-objective optimization problem. The scheduling model is implemented as a stand-alone module using constraint programming, and integrated with a larger automatic system. The results of a number of simulation experiments with simple parts are reported, both to characterize the functionality of the scheduler and to illustrate the operation of the entire software system for automatic generation of robot programs for painting." ] }
1901.08019
2912987408
We present a variation of the Autoencoder (AE) that explicitly maximizes the mutual information between the input data and the hidden representation. The proposed model, the InfoMax Autoencoder (IMAE), by construction is able to learn a robust representation and good prototypes of the data. IMAE is compared both theoretically and then computationally with the state of the art models: the Denoising and Contractive Autoencoders in the one-hidden layer setting and the Variational Autoencoder in the multi-layer case. Computational experiments are performed with the MNIST and Fashion-MNIST datasets and demonstrate particularly the strong clusterization performance of IMAE.
The first neural network defined to explicitly maximize the information between the input and the hidden layer was proposed by Linsker @cite_5 , who gave the objective function the name InfoMax. This is a linear model that actually maximizes only the entropy of the representation, performing Principal Component Analysis (PCA) in the case of Gaussian-distributed data. The same objective function was applied by Bell and Sejnowski @cite_9 , showing that a one-layer neural network with sigmoid activation performs Independent Component Analysis (ICA).
{ "cite_N": [ "@cite_5", "@cite_9" ], "mid": [ "2411252390", "782077188" ], "abstract": [ "The outputs of a trained neural network contain much richer information than just an one-hot classifier. For example, a neural network might give an image of a dog the probability of one in a million of being a cat but it is still much larger than the probability of being a car. To reveal the hidden structure in them, we apply two unsupervised learning algorithms, PCA and ICA, to the outputs of a deep Convolutional Neural Network trained on the ImageNet of 1000 classes. The PCA ICA embedding of the object classes reveals their visual similarity and the PCA ICA components can be interpreted as common visual features shared by similar object classes. For an application, we proposed a new zero-shot learning method, in which the visual features learned by PCA ICA are employed. Our zero-shot learning method achieves the state-of-the-art results on the ImageNet of over 20000 classes.", "The outputs of a trained neural network contain much richer information than just an one-hot classifier. For example, a neural network might give an image of a dog the probability of one in a million of being a cat but it is still much larger than the probability of being a car. To reveal the hidden structure in them, we apply two unsupervised learning algorithms, PCA and ICA, to the outputs of a deep Convolutional Neural Network trained on the ImageNet of 1000 classes. The PCA ICA embedding of the object classes reveals their visual similarity and the PCA ICA components can be interpreted as common visual features shared by similar object classes. For an application, we proposed a new zero-shot learning method, in which the visual features learned by PCA ICA are employed. Our zero-shot learning method achieves the state-of-the-art results on the ImageNet of over 20000 classes." ] }
1901.08019
2912987408
We present a variation of the Autoencoder (AE) that explicitly maximizes the mutual information between the input data and the hidden representation. The proposed model, the InfoMax Autoencoder (IMAE), by construction is able to learn a robust representation and good prototypes of the data. IMAE is compared both theoretically and then computationally with the state of the art models: the Denoising and Contractive Autoencoders in the one-hidden layer setting and the Variational Autoencoder in the multi-layer case. Computational experiments are performed with the MNIST and Fashion-MNIST datasets and demonstrate particularly the strong clusterization performance of IMAE.
Both these models are quite restrictive: they work only under the assumption that the visible data is a linear combination of hidden features. A more general way to extract hidden features is given by the Autoencoder (AE), a NN that is the composition of an encoder and a decoder map, respectively @math and @math . This model can be seen as a generalization of the PCA model because, under the assumption that @math and @math are linear maps, the space spanned by @math is the same as the one spanned by the principal components; see, e.g., @cite_6 .
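To make the PCA connection concrete, here is a minimal toy sketch (our own example, with all names hypothetical) of a linear AE trained by gradient descent on the squared reconstruction error; the learned one-dimensional code aligns with the first principal component:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy 2-D data with a dominant direction of variance
X = rng.normal(size=(500, 2)) @ np.array([[3.0, 0.0], [1.0, 0.5]])
X -= X.mean(axis=0)

k = 1                                        # hidden (code) dimension
W_enc = rng.normal(scale=0.1, size=(2, k))   # encoder f(x) = x @ W_enc
W_dec = rng.normal(scale=0.1, size=(k, 2))   # decoder g(h) = h @ W_dec

lr = 1e-3
for _ in range(5000):
    H = X @ W_enc                 # hidden codes
    R = H @ W_dec - X             # reconstruction residuals
    # Gradients of the mean squared reconstruction error
    gW_dec = H.T @ R / len(X)
    gW_enc = X.T @ (R @ W_dec.T) / len(X)
    W_dec -= lr * gW_dec
    W_enc -= lr * gW_enc

# The decoder's span coincides with the top principal component.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
pc1 = Vt[0]
ae_dir = W_dec[0] / np.linalg.norm(W_dec[0])
print("cosine with PC1:", abs(ae_dir @ pc1))  # close to 1
```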
{ "cite_N": [ "@cite_6" ], "mid": [ "2127979711" ], "abstract": [ "This paper proposes to use autoencoders with nonlinear dimensionality reduction in the anomaly detection task. The authors apply dimensionality reduction by using an autoencoder onto both artificial data and real data, and compare it with linear PCA and kernel PCA to clarify its property. The artificial data is generated from Lorenz system, and the real data is the spacecrafts' telemetry data. This paper demonstrates that autoencoders are able to detect subtle anomalies which linear PCA fails. Also, autoencoders can increase their accuracy by extending them to denoising autoenconders. Moreover, autoencoders can be useful as nonlinear techniques without complex computation as kernel PCA requires. Finaly, the authors examine the learned features in the hidden layer of autoencoders, and present that autoencoders learn the normal state properly and activate differently with anomalous input." ] }
1901.08019
2912987408
We present a variation of the Autoencoder (AE) that explicitly maximizes the mutual information between the input data and the hidden representation. The proposed model, the InfoMax Autoencoder (IMAE), is by construction able to learn a robust representation and good prototypes of the data. IMAE is compared both theoretically and computationally with state-of-the-art models: the Denoising and Contractive Autoencoders in the one-hidden-layer setting and the Variational Autoencoder in the multi-layer case. Computational experiments are performed with the MNIST and Fashion-MNIST datasets and demonstrate, in particular, the strong clustering performance of IMAE.
An information-theoretic description of an AE was given in @cite_4 where, under restrictive assumptions, it was observed that reducing the reconstruction loss of an AE is related to maximizing the mutual information between the visible and hidden variables.
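The underlying relation is a standard variational identity, stated here for completeness in its generic form rather than quoted from @cite_4 : for any decoder distribution $q(x\mid h)$,

$$
I(X;H) \;=\; H(X) - H(X\mid H) \;\ge\; H(X) + \mathbb{E}_{p(x,h)}\!\left[\log q(x\mid h)\right],
$$

and with the Gaussian choice $q(x\mid h)=\mathcal{N}\!\left(x;\, g(h),\, \sigma^2 I\right)$ the right-hand side equals $H(X) - \tfrac{1}{2\sigma^2}\,\mathbb{E}\lVert x - g(h)\rVert^2$ up to an additive constant, so reducing the expected reconstruction error tightens a lower bound on the mutual information.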
{ "cite_N": [ "@cite_4" ], "mid": [ "2134092622" ], "abstract": [ "Many interesting domains in machine learning can be viewed as networks, with relationships (e.g., friendships) connecting items (e.g., individuals). The Active Exploration (AE) task is to identify all items in a network with a desired trait (i.e., positive labels) given only partial information about the network. The AE process iteratively queries for labels or network structure within a limited budget; thus, accurate predictions prior to making each query is critical to maximizing the number of positives gathered. However, the targeted AE query process produces partially observed networks that can create difficulties for predictive modeling. In particular, we demonstrate that these partial networks can exhibit extreme label correlation bias, which makes it difficult for conventional relational learning methods to accurately estimate relational parameters. To overcome this issue, we model the joint distribution of possible edges and labels to improve learning and inference. Our proposed method, Probabilistic Relational Expectation Maximization (PR-EM), is the first AE approach to accurately learn the complex dependencies between attributes, labels, and structure to improve predictions. PR-EM utilizes collective inference over the missing relationships in the partial network to jointly infer unknown item traits. Further, we develop a linear inference algorithm to facilitate efficient use of PR-EM in large networks. We test our approach on four real world networks, showing that AE with PR-EM gathers significantly more positive items compared to state-of-the-art methods." ] }
1901.08019
2912987408
We present a variation of the Autoencoder (AE) that explicitly maximizes the mutual information between the input data and the hidden representation. The proposed model, the InfoMax Autoencoder (IMAE), is by construction able to learn a robust representation and good prototypes of the data. IMAE is compared both theoretically and computationally with state-of-the-art models: the Denoising and Contractive Autoencoders in the one-hidden-layer setting and the Variational Autoencoder in the multi-layer case. Computational experiments are performed with the MNIST and Fashion-MNIST datasets and demonstrate, in particular, the strong clustering performance of IMAE.
The IMAE, as we will see in the next section, is an AE that is able to learn a robust representation. In the literature there are many models in this family designed to learn a good representation; see, e.g., @cite_13 and references therein. For practical reasons, in the following section we compare the IMAE with the AEs that are recognized to learn the best features: the Denoising, Contractive and Variational Autoencoders.
{ "cite_N": [ "@cite_13" ], "mid": [ "2110750681" ], "abstract": [ "Denoising auto-encoders (DAEs) have been successfully used to learn new representations for a wide range of machine learning tasks. During training, DAEs make many passes over the training dataset and reconstruct it from partial corruption generated from a pre-specified corrupting distribution. This process learns robust representation, though at the expense of requiring many training epochs, in which the data is explicitly corrupted. In this paper we present the marginalized Denoising Auto-encoder (mDAE), which (approximately) marginalizes out the corruption during training. Effectively, the mDAE takes into account infinitely many corrupted copies of the training data in every epoch, and therefore is able to match or outperform the DAE with much fewer training epochs. We analyze our proposed algorithm and show that it can be understood as a classic auto-encoder with a special form of regularization. In empirical evaluations we show that it attains 1-2 order-of-magnitude speedup in training time over other competing approaches." ] }
1901.08101
2950236485
Can faces acquired by low-cost depth sensors be useful to capture characteristic details of the face? Typically the answer is no. However, new deep architectures can generate RGB images from data acquired in a different modality, such as depth data. In this paper, we propose a new network, trained on annotated RGB-D face datasets, effective for face-to-face translation from depth to RGB. Although the network cannot reconstruct the exact somatic features of unknown individual faces, it is capable of reconstructing plausible faces whose appearance is accurate enough to be used in many pattern recognition tasks. In fact, we test the network's capability to hallucinate with some probe perceptual tasks, such as face aspect classification or landmark detection. Depth faces can be used in place of the corresponding RGB images, which are often not available due to difficult lighting conditions. Experimental results are very promising and far better than those of previously proposed approaches: this domain translation can constitute a new way to exploit depth data in future applications.
GANs have been defined very recently and tested in several contexts. Our work is inspired by the original idea of Generative Adversarial Networks by Goodfellow @cite_8 , with some variants in terms of conditional and discriminative GANs. GANs have been successfully used for image-to-image translation; this was initially presented in @cite_19 and then applied in contexts such as unpaired image-to-image translation @cite_17 . A previous work starting from the same depth images designed an Autoencoder to create gray-level faces from depth, with the final goal of head pose estimation @cite_11 . An extension of this work is performed in @cite_6 , where a GAN is trained for the same final goal. In this paper, we compare our architecture with @cite_19 , using a similar dataset, other datasets, and some probe perceptual tasks.
{ "cite_N": [ "@cite_8", "@cite_17", "@cite_6", "@cite_19", "@cite_11" ], "mid": [ "2963444790", "2608015370", "2739540493", "2963163163", "2766091292" ], "abstract": [ "Conditional Generative Adversarial Networks (GANs) for cross-domain image-to-image translation have made much progress recently [7, 8, 21, 12, 4, 18]. Depending on the task complexity, thousands to millions of labeled image pairs are needed to train a conditional GAN. However, human labeling is expensive, even impractical, and large quantities of data may not always be available. Inspired by dual learning from natural language translation [23], we develop a novel dual-GAN mechanism, which enables image translators to be trained from two sets of unlabeled images from two domains. In our architecture, the primal GAN learns to translate images from domain U to those in domain V, while the dual GAN learns to invert the task. The closed loop made by the primal and dual tasks allows images from either domain to be translated and then reconstructed. Hence a loss function that accounts for the reconstruction error of images can be used to train the translators. Experiments on multiple image translation tasks with unlabeled data show considerable performance gain of DualGAN over a single GAN. For some tasks, DualGAN can even achieve comparable or slightly better results than conditional GAN trained on fully labeled data.", "Conditional Generative Adversarial Networks (GANs) for cross-domain image-to-image translation have made much progress recently. Depending on the task complexity, thousands to millions of labeled image pairs are needed to train a conditional GAN. However, human labeling is expensive, even impractical, and large quantities of data may not always be available. Inspired by dual learning from natural language translation, we develop a novel dual-GAN mechanism, which enables image translators to be trained from two sets of unlabeled images from two domains. In our architecture, the primal GAN learns to translate images from domain U to those in domain V, while the dual GAN learns to invert the task. The closed loop made by the primal and dual tasks allows images from either domain to be translated and then reconstructed. Hence a loss function that accounts for the reconstruction error of images can be used to train the translators. Experiments on multiple image translation tasks with unlabeled data show considerable performance gain of DualGAN over a single GAN. For some tasks, DualGAN can even achieve comparable or slightly better results than conditional GAN trained on fully labeled data.", "Despite the promising results on paired unpaired image-to-image translation achieved by Generative Adversarial Networks (GANs), prior works often only transfer the low-level information (e.g. color or texture changes), but fail to manipulate high-level semantic meanings (e.g., geometric structure or content) of different object regions. On the other hand, while some researches can synthesize compelling real-world images given a class label or caption, they cannot condition on arbitrary shapes or structures, which largely limits their application scenarios and interpretive capability of model results. In this work, we focus on a more challenging semantic manipulation task, aiming at modifying the semantic meaning of an object while preserving its own characteristics (e.g. viewpoints and shapes), such as cow ( )sheep, motor ( )bicycle, cat ( )dog. 
To tackle such large semantic changes, we introduce a contrasting GAN (contrast-GAN) with a novel adversarial contrasting objective which is able to perform all types of semantic translations with one category-conditional generator. Instead of directly making the synthesized samples close to target data as previous GANs did, our adversarial contrasting objective optimizes over the distance comparisons between samples, that is, enforcing the manipulated data be semantically closer to the real data with target category than the input data. Equipped with the new contrasting objective, a novel mask-conditional contrast-GAN architecture is proposed to enable disentangle image background with object semantic changes. Extensive qualitative and quantitative experiments on several semantic manipulation tasks on ImageNet and MSCOCO dataset show considerable performance gain by our contrast-GAN over other conditional GANs.", "Although Generative Adversarial Networks (GANs) have shown remarkable success in various tasks, they still face challenges in generating high quality images. In this paper, we propose Stacked Generative Adversarial Networks (StackGANs) aimed at generating high-resolution photo-realistic images. First, we propose a two-stage generative adversarial network architecture, StackGAN-v1, for text-to-image synthesis. The Stage-I GAN sketches the primitive shape and colors of a scene based on a given text description, yielding low-resolution images. The Stage-II GAN takes Stage-I results and the text description as inputs, and generates high-resolution images with photo-realistic details. Second, an advanced multi-stage generative adversarial network architecture, StackGAN-v2, is proposed for both conditional and unconditional generative tasks. Our StackGAN-v2 consists of multiple generators and multiple discriminators arranged in a tree-like structure; images at multiple scales corresponding to the same scene are generated from different branches of the tree. StackGAN-v2 shows more stable training behavior than StackGAN-v1 by jointly approximating multiple distributions. Extensive experiments demonstrate that the proposed stacked generative adversarial networks significantly outperform other state-of-the-art methods in generating photo-realistic images.", "Although Generative Adversarial Networks (GANs) have shown remarkable success in various tasks, they still face challenges in generating high quality images. In this paper, we propose Stacked Generative Adversarial Networks (StackGAN) aiming at generating high-resolution photo-realistic images. First, we propose a two-stage generative adversarial network architecture, StackGAN-v1, for text-to-image synthesis. The Stage-I GAN sketches the primitive shape and colors of the object based on given text description, yielding low-resolution images. The Stage-II GAN takes Stage-I results and text descriptions as inputs, and generates high-resolution images with photo-realistic details. Second, an advanced multi-stage generative adversarial network architecture, StackGAN-v2, is proposed for both conditional and unconditional generative tasks. Our StackGAN-v2 consists of multiple generators and discriminators in a tree-like structure; images at multiple scales corresponding to the same scene are generated from different branches of the tree. StackGAN-v2 shows more stable training behavior than StackGAN-v1 by jointly approximating multiple distributions. 
Extensive experiments demonstrate that the proposed stacked generative adversarial networks significantly outperform other state-of-the-art methods in generating photo-realistic images." ] }
1901.08101
2950236485
Can faces acquired by low-cost depth sensors be useful to capture characteristic details of the face? Typically the answer is no. However, new deep architectures can generate RGB images from data acquired in a different modality, such as depth data. In this paper, we propose a new network, trained on annotated RGB-D face datasets, effective for face-to-face translation from depth to RGB. Although the network cannot reconstruct the exact somatic features of unknown individual faces, it is capable of reconstructing plausible faces whose appearance is accurate enough to be used in many pattern recognition tasks. In fact, we test the network's capability to hallucinate with some probe perceptual tasks, such as face aspect classification or landmark detection. Depth faces can be used in place of the corresponding RGB images, which are often not available due to difficult lighting conditions. Experimental results are very promising and far better than those of previously proposed approaches: this domain translation can constitute a new way to exploit depth data in future applications.
The recent spread of high-quality, cheap and accurate commercial depth sensors has encouraged the researchers of the computer vision community. Depth data are a useful source of information, especially for systems that have to work in the presence of darkness or dramatic light changes. Besides, recent depth sensors usually exploit infrared light instead of lasers, so their use is safer for humans. In the literature, the potential of depth images as input for deep learning approaches has not been fully investigated yet. Only recently have Convolutional Neural Networks (CNNs) and depth maps been exploited for various tasks, like head pose estimation @cite_11 @cite_5 , facial landmark detection @cite_9 , and head detection and obstacle detection @cite_1 . Various types of deep architectures have been investigated, such as LSTMs @cite_4 and Siamese networks @cite_12 . The importance of this source of information is confirmed by works that aim to retrieve depth from monocular RGB images @cite_18 @cite_15 .
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_9", "@cite_1", "@cite_5", "@cite_15", "@cite_12", "@cite_11" ], "mid": [ "2888634437", "2796461436", "2760112017", "1985912834", "2950619061", "2810477386", "2416791088", "2910870181" ], "abstract": [ "Dense depth cues are important and have wide applications in various computer vision tasks. In autonomous driving, LIDAR sensors are adopted to acquire depth measurements around the vehicle to perceive the surrounding environments. However, depth maps obtained by LIDAR are generally sparse because of its hardware limitation. The task of depth completion attracts increasing attention, which aims at generating a dense depth map from an input sparse depth map. To effectively utilize multi-scale features, we propose three novel sparsity-invariant operations, based on which, a sparsity-invariant multi-scale encoder-decoder network (HMS-Net) for handling sparse inputs and sparse feature maps is also proposed. Additional RGB features could be incorporated to further improve the depth completion performance. Our extensive experiments and component analysis on two public benchmarks, KITTI depth completion benchmark and NYU-depth-v2 dataset, demonstrate the effectiveness of the proposed approach. As of Aug. 12th, 2018, on KITTI depth completion leaderboard, our proposed model without RGB guidance ranks first among all peer-reviewed methods without using RGB information, and our model with RGB guidance ranks second among all RGB-guided methods.", "We present a deep model that can accurately produce dense depth maps given an RGB image with known depth at a very sparse set of pixels. The model works simultaneously for both indoor outdoor scenes and produces state-of-the-art dense depth maps at nearly real-time speeds on both the NYUv2 and KITTI datasets. We surpass the state-of-the-art for monocular depth estimation even with depth values for only 1 out of every ( 10000 ) image pixels, and we outperform other sparse-to-dense depth methods at all sparsity levels. With depth values for (1 256 ) of the image pixels, we achieve a mean error of less than (1 ) of actual depth on indoor scenes, comparable to the performance of consumer-grade depth sensor hardware. Our experiments demonstrate that it would indeed be possible to efficiently transform sparse depth measurements obtained using e.g. lower-power depth sensors or SLAM systems into high-quality dense depth maps.", "We consider the problem of dense depth prediction from a sparse set of depth measurements and a single RGB image. Since depth estimation from monocular images alone is inherently ambiguous and unreliable, to attain a higher level of robustness and accuracy, we introduce additional sparse depth samples, which are either acquired with a low-resolution depth sensor or computed via visual Simultaneous Localization and Mapping (SLAM) algorithms. We propose the use of a single deep regression network to learn directly from the RGB-D raw data, and explore the impact of number of depth samples on prediction accuracy. Our experiments show that, compared to using only RGB images, the addition of 100 spatially random depth samples reduces the prediction root-mean-square error by 50 on the NYU-Depth-v2 indoor dataset. It also boosts the percentage of reliable prediction from 59 to 92 on the KITTI dataset. We demonstrate two applications of the proposed algorithm: a plug-in module in SLAM to convert sparse maps to dense maps, and super-resolution for LiDARs. 
Software and video demonstration are publicly available.", "Human activity understanding with 3D depth sensors has received increasing attention in multimedia processing and interactions. This work targets on developing a novel deep model for automatic activity recognition from RGB-D videos. We represent each human activity as an ensemble of cubic-like video segments, and learn to discover the temporal structures for a category of activities, i.e. how the activities to be decomposed in terms of classification. Our model can be regarded as a structured deep architecture, as it extends the convolutional neural networks (CNNs) by incorporating structure alternatives. Specifically, we build the network consisting of 3D convolutions and max-pooling operators over the video segments, and introduce the latent variables in each convolutional layer manipulating the activation of neurons. Our model thus advances existing approaches in two aspects: (i) it acts directly on the raw inputs (grayscale-depth data) to conduct recognition instead of relying on hand-crafted features, and (ii) the model structure can be dynamically adjusted accounting for the temporal variations of human activities, i.e. the network configuration is allowed to be partially activated during inference. For model training, we propose an EM-type optimization method that iteratively (i) discovers the latent structure by determining the decomposed actions for each training example, and (ii) learns the network parameters by using the back-propagation algorithm. Our approach is validated in challenging scenarios, and outperforms state-of-the-art methods. A large human activity database of RGB-D videos is presented in addition.", "We consider the problem of depth estimation from a single monocular image in this work. It is a challenging task as no reliable depth cues are available, e.g., stereo correspondences, motions, etc. Previous efforts have been focusing on exploiting geometric priors or additional sources of information, with all using hand-crafted features. Recently, there is mounting evidence that features from deep convolutional neural networks (CNN) are setting new records for various vision applications. On the other hand, considering the continuous characteristic of the depth values, depth estimations can be naturally formulated into a continuous conditional random field (CRF) learning problem. Therefore, we in this paper present a deep convolutional neural field model for estimating depths from a single image, aiming to jointly explore the capacity of deep CNN and continuous CRF. Specifically, we propose a deep structured learning scheme which learns the unary and pairwise potentials of continuous CRF in a unified deep CNN framework. The proposed method can be used for depth estimations of general scenes with no geometric priors nor any extra information injected. In our case, the integral of the partition function can be analytically calculated, thus we can exactly solve the log-likelihood optimization. Moreover, solving the MAP problem for predicting depths of a new image is highly efficient as closed-form solutions exist. We experimentally demonstrate that the proposed method outperforms state-of-the-art depth estimation methods on both indoor and outdoor scene datasets.", "Depth estimation has achieved considerable success with the development of the depth sensor devices and deep learning method. However, depth estimation from monocular RGB-based image will increase ambiguity and is prone to error. 
In this paper, we present a novel approach to produce dense depth map from a single image coupled with coarse point-cloud samples. Our approach learns to fit the distribution of the depth map from source data using conditional adversarial networks and convert the sparse point clouds to dense maps. Our experiments show that the use of the conditional adversarial networks can add full image information to the predicted depth maps and the effectiveness of our approach to predict depth in NYU-Depth-v2 indoor dataset.", "In this paper, we explore various aspects of fusing LIDAR and color imagery for pedestrian detection in the context of convolutional neural networks (CNNs), which have recently become state-of-art for many vision problems. We incorporate LIDAR by up-sampling the point cloud to a dense depth map and then extracting three features representing different aspects of the 3D scene. We then use those features as extra image channels. Specifically, we leverage recent work on HHA [9] (horizontal disparity, height above ground, and angle) representations, adapting the code to work on up-sampled LIDAR rather than Microsoft Kinect depth maps. We show, for the first time, that such a representation is applicable to up-sampled LIDAR data, despite its sparsity. Since CNNs learn a deep hierarchy of feature representations, we then explore the question: At what level of representation should we fuse this additional information with the original RGB image channels? We use the KITTI pedestrian detection dataset for our exploration. We first replicate the finding that region-CNNs (R-CNNs) [8] can outperform the original proposal mechanism using only RGB images, but only if fine-tuning is employed. Then, we show that: 1) using HHA features and RGB images performs better than RGB-only, even without any fine-tuning using large RGB web data, 2) fusing RGB and HHA achieves the strongest results if done late, but, under a parameter or computational budget, is best done at the early to middle layers of the hierarchical representation, which tend to represent midlevel features rather than low (e.g. edges) or high (e.g. object class decision) level features, 3) some of the less successful methods have the most parameters, indicating that increased classification accuracy is not simply a function of increased capacity in the neural network.", "Depth sensing is crucial for 3D reconstruction and scene understanding. Active depth sensors provide dense metric measurements, but often suffer from limitations such as restricted operating ranges, low spatial resolution, sensor interference, and high power consumption. In this paper, we propose a deep learning (DL) method to estimate per-pixel depth and its uncertainty continuously from a monocular video stream, with the goal of effectively turning an RGB camera into an RGB-D camera. Unlike prior DL-based methods, we estimate a depth probability distribution for each pixel rather than a single depth value, leading to an estimate of a 3D depth probability volume for each input frame. These depth probability volumes are accumulated over time under a Bayesian filtering framework as more incoming frames are processed sequentially, which effectively reduces depth uncertainty and improves accuracy, robustness, and temporal stability. Compared to prior work, the proposed approach achieves more accurate and stable results, and generalizes better to new datasets. 
Experimental results also show the output of our approach can be directly fed into classical RGB-D based 3D scanning methods for 3D scene reconstruction." ] }
1901.07860
2914377850
Policy evaluation is a key process in reinforcement learning. It assesses a given policy using an estimate of the corresponding value function. When using a parameterized function to approximate the value, it is common to optimize the set of parameters by minimizing the sum of squared Bellman Temporal Difference errors. However, this approach ignores certain distributional properties of both the errors and the value parameters. Taking these distributions into account in the optimization process can provide useful information on the amount of confidence in value estimation. In this work we propose to optimize the value by minimizing a regularized objective function which forms a trust region over its parameters. We present a novel optimization method, the Kalman Optimization for Value Approximation (KOVA), based on the Extended Kalman Filter. KOVA minimizes the regularized objective function by adopting a Bayesian perspective over both the value parameters and the noisy observed returns. This distributional property provides information on parameter uncertainty in addition to value estimates. We provide theoretical results for our approach and analyze the performance of our proposed optimizer on domains with large state and action spaces.
Bayesian Neural Networks (BNNs): There are several works on Bayesian methods for placing uncertainty on the approximator parameters @cite_39 @cite_40 . BNNs have also been used for learning MDP dynamics in RL tasks. In these works a fully factorized Gaussian distribution on the parameters is assumed, while we consider possible correlations between parameters. In addition, BNNs require sampling the parameters and running several feed-forward passes for each of the parameter samples. Our incremental method avoids multiple samples of the parameters, since the uncertainty is propagated with every optimization update.
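To make this sampling cost explicit, the following is a minimal sketch (our own illustration with hypothetical names, not code from the cited works) of prediction under a fully factorized Gaussian posterior over the weights; note the loop of independent feed-forward passes that our incremental method avoids:

```python
import numpy as np

def mc_predict(x, mu, rho, forward, n_samples=50, rng=None):
    """Monte Carlo prediction with a fully factorized Gaussian posterior.

    mu, rho   : per-weight posterior mean and softplus-parameterized scale
    forward   : forward(x, w) -> prediction for one flat weight vector w
    n_samples : number of posterior samples, i.e., of feed-forward passes
    """
    rng = rng or np.random.default_rng()
    sigma = np.log1p(np.exp(rho))        # softplus keeps the scales positive
    preds = []
    for _ in range(n_samples):
        w = mu + sigma * rng.normal(size=mu.shape)   # one weight sample
        preds.append(forward(x, w))                  # one feed-forward pass
    preds = np.asarray(preds)
    return preds.mean(axis=0), preds.var(axis=0)     # predictive mean, uncertainty
```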
{ "cite_N": [ "@cite_40", "@cite_39" ], "mid": [ "2949496227", "2897001865" ], "abstract": [ "Variational Bayesian neural networks (BNNs) perform variational inference over weights, but it is difficult to specify meaningful priors and approximate posteriors in a high-dimensional weight space. We introduce functional variational Bayesian neural networks (fBNNs), which maximize an Evidence Lower BOund (ELBO) defined directly on stochastic processes, i.e. distributions over functions. We prove that the KL divergence between stochastic processes equals the supremum of marginal KL divergences over all finite sets of inputs. Based on this, we introduce a practical training objective which approximates the functional ELBO using finite measurement sets and the spectral Stein gradient estimator. With fBNNs, we can specify priors entailing rich structures, including Gaussian processes and implicit stochastic processes. Empirically, we find fBNNs extrapolate well using various structured priors, provide reliable uncertainty estimates, and scale to large datasets.", "Bayesian neural networks (BNNs) hold great promise as a flexible and principled solution to deal with uncertainty when learning from finite data. Among approaches to realize probabilistic inference in deep neural networks, variational Bayes (VB) is theoretically grounded, generally applicable, and computationally efficient. With wide recognition of potential advantages, why is it that variational Bayes has seen very limited practical use for BNNs in real applications? We argue that variational inference in neural networks is fragile: successful implementations require careful initialization and tuning of prior variances, as well as controlling the variance of Monte Carlo gradient estimates. We fix VB and turn it into a robust inference tool for Bayesian neural networks. We achieve this with two innovations: first, we introduce a novel deterministic method to approximate moments in neural networks, eliminating gradient variance; second, we introduce a hierarchical prior for parameters and a novel empirical Bayes procedure for automatically selecting prior variances. Combining these two innovations, the resulting method is highly efficient and robust. On the application of heteroscedastic regression we demonstrate strong predictive performance over alternative approaches." ] }
1901.07860
2914377850
Policy evaluation is a key process in reinforcement learning. It assesses a given policy using an estimate of the corresponding value function. When using a parameterized function to approximate the value, it is common to optimize the set of parameters by minimizing the sum of squared Bellman Temporal Difference errors. However, this approach ignores certain distributional properties of both the errors and the value parameters. Taking these distributions into account in the optimization process can provide useful information on the amount of confidence in value estimation. In this work we propose to optimize the value by minimizing a regularized objective function which forms a trust region over its parameters. We present a novel optimization method, the Kalman Optimization for Value Approximation (KOVA), based on the Extended Kalman Filter. KOVA minimizes the regularized objective function by adopting a Bayesian perspective over both the value parameters and the noisy observed returns. This distributional property provides information on parameter uncertainty in addition to value estimates. We provide theoretical results for our approach and analyze the performance of our proposed optimizer on domains with large state and action spaces.
Kalman filters: Outside of the RL framework, the use of the Kalman filter as an optimization method is discussed in @cite_11 @cite_1 @cite_30 . Among these, some solve the dynamics of each parameter with Kalman filtering, while others use the Kalman filter for normalizing batches. In our work we use Kalman filtering for VF optimization in the context of RL. The EKF is connected with the incremental Gauss-Newton method @cite_23 and with the on-line natural gradient @cite_3 . These methods require inverting the @math -dimensional Fisher information matrix (for a @math -dimensional parameter) and thus require substantial computational resources. Our method avoids this inversion in the update step, which is more computationally efficient.
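For concreteness, here is a minimal sketch of an EKF-style parameter update with a fading-memory factor (a generic illustration in our notation, not the exact algorithm of any cited work). The only matrix that is inverted is the m-by-m innovation covariance, where m is the number of observations in the batch, rather than a d-by-d Fisher matrix:

```python
import numpy as np

def ekf_step(theta, P, x, y, h, jac_h, R, eta=1.0):
    """One extended-Kalman-filter update of a parameter vector theta.

    theta : (d,) current parameter estimate
    P     : (d, d) parameter error covariance
    h     : model, h(theta, x) -> (m,) predicted observations
    jac_h : Jacobian of h w.r.t. theta at (theta, x), shape (m, d)
    R     : (m, m) observation noise covariance
    eta   : fading-memory factor in (0, 1]; eta < 1 decays old information
    """
    P = P / eta                        # predict: inflate covariance (fading memory)
    H = jac_h(theta, x)                # linearize the model around theta
    S = H @ P @ H.T + R                # (m, m) innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # (d, m) Kalman gain: only an m x m inverse
    theta = theta + K @ (y - h(theta, x))   # correct with the innovation
    P = P - K @ H @ P                  # posterior covariance
    return theta, P
```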
{ "cite_N": [ "@cite_30", "@cite_1", "@cite_3", "@cite_23", "@cite_11" ], "mid": [ "1749494163", "2079511563", "2160337655", "2008336375", "2886639315" ], "abstract": [ "This paper points out the flaws in using the extended Kalman filter (EKE) and introduces an improvement, the unscented Kalman filter (UKF), proposed by Julier and Uhlman (1997). A central and vital operation performed in the Kalman filter is the propagation of a Gaussian random variable (GRV) through the system dynamics. In the EKF the state distribution is approximated by a GRV, which is then propagated analytically through the first-order linearization of the nonlinear system. This can introduce large errors in the true posterior mean and covariance of the transformed GRV, which may lead to sub-optimal performance and sometimes divergence of the filter. The UKF addresses this problem by using a deterministic sampling approach. The state distribution is again approximated by a GRV, but is now represented using a minimal set of carefully chosen sample points. These sample points completely capture the true mean and covariance of the GRV, and when propagated through the true nonlinear system, captures the posterior mean and covariance accurately to the 3rd order (Taylor series expansion) for any nonlinearity. The EKF in contrast, only achieves first-order accuracy. Remarkably, the computational complexity of the UKF is the same order as that of the EKF. Julier and Uhlman demonstrated the substantial performance gains of the UKF in the context of state-estimation for nonlinear control. Machine learning problems were not considered. We extend the use of the UKF to a broader class of nonlinear estimation problems, including nonlinear system identification, training of neural networks, and dual estimation problems. In this paper, the algorithms are further developed and illustrated with a number of additional examples.", "Extended Kalman filter (EKF) has been a popular choice to solve simultaneous localization and mapping (SLAM) problems for mobile robots or vehicles. However, the performance of the EKF depends on the correct a priori knowledge of process and sensor measurement noise covariance matrices (Q and R, respectively). Imprecise knowledge of these statistics can cause significant degradation in performance. The present paper proposes the development of a new neurofuzzy based adaptive Kalman filtering algorithm for simultaneous localization and mapping of mobile robots or vehicles, which attempts to estimate the elements of the R matrix of the EKF algorithm, at each sampling instant when a ldquomeasurement updaterdquo step is carried out. The neuro-fuzzy based supervision for the EKF algorithm is carried out with the aim of reducing the mismatch between the theoretical and the actual covariance of the innovation sequences. The free parameters of the neuro-fuzzy system are learned offline, by employing particle swarm optimization in the training phase, which configures the training problem as a high-dimensional stochastic optimization problem. By employing a mobile robot to localize and simultaneously acquire the map of the environment, under several benchmark environment situations with varying landmarks and under several conditions of wrong knowledge of sensor statistics, the performance of the proposed scheme has been evaluated. 
It has been successfully demonstrated that in each case, the neuro-fuzzy assistance is able to improve highly unpredictable, degrading performance of the EKF and can provide robust and accurate solutions.", "Increasingly, for many application areas, it is becoming important to include elements of nonlinearity and non-Gaussianity in order to model accurately the underlying dynamics of a physical system. Moreover, it is typically crucial to process data on-line as it arrives, both from the point of view of storage costs as well as for rapid adaptation to changing signal characteristics. In this paper, we review both optimal and suboptimal Bayesian algorithms for nonlinear non-Gaussian tracking problems, with a focus on particle filters. Particle filters are sequential Monte Carlo methods based on point mass (or \"particle\") representations of probability densities, which can be applied to any state-space model and which generalize the traditional Kalman filtering methods. Several variants of the particle filter such as SIR, ASIR, and RPF are introduced within a generic framework of the sequential importance sampling (SIS) algorithm. These are discussed and compared with the standard EKF through an illustrative example.", "In this paper, the application of the fuzzy interacting multiple model unscented Kalman filter (FUZZY-IMMUKF) approach to integrated navigation processing for the maneuvering vehicle is presented. The unscented Kalman filter (UKF) employs a set of sigma points through deterministic sampling, such that a linearization process is not necessary, and therefore the errors caused by linearization as in the traditional extended Kalman filter (EKF) can be avoided. The nonlinear filters naturally suffer, to some extent, the same problem as the EKF for which the uncertainty of the process noise and measurement noise will degrade the performance. As a structural adaptation (model switching) mechanism, the interacting multiple model (IMM), which describes a set of switching models, can be utilized for determining the adequate value of process noise covariance. The fuzzy logic adaptive system (FLAS) is employed to determine the lower and upper bounds of the system noise through the fuzzy inference system (FIS). The resulting sensor fusion strategy can efficiently deal with the nonlinear problem for the vehicle navigation. The proposed FUZZY-IMMUKF algorithm shows remarkable improvement in the navigation estimation accuracy as compared to the relatively conventional approaches such as the UKF and IMMUKF.", "We specialize the decoupled extended Kalman filter (DEKF) for online parameter learning in factorization models, including factorization machines, matrix and tensor factorization, and illustrate the effectiveness of the approach through simulations. Learning model parameters through the DEKF makes factorization models more broadly useful by allowing for more flexible observations through the entire exponential family, modeling parameter drift, and producing parameter uncertainty estimates that can enable explore exploit and other applications. We use a more general dynamics of the parameters than the standard DEKF, allowing parameter drift while encouraging reasonable values. We also present an alternate derivation of the regular extended Kalman filter and DEKF that connects these methods to natural gradient methods, and suggests a similarly decoupled version of the iterated extended Kalman filter." ] }
1901.07860
2914377850
Policy evaluation is a key process in reinforcement learning. It assesses a given policy using an estimate of the corresponding value function. When using a parameterized function to approximate the value, it is common to optimize the set of parameters by minimizing the sum of squared Bellman Temporal Difference errors. However, this approach ignores certain distributional properties of both the errors and the value parameters. Taking these distributions into account in the optimization process can provide useful information on the amount of confidence in value estimation. In this work we propose to optimize the value by minimizing a regularized objective function which forms a trust region over its parameters. We present a novel optimization method, the Kalman Optimization for Value Approximation (KOVA), based on the Extended Kalman Filter. KOVA minimizes the regularized objective function by adopting a Bayesian perspective over both the value parameters and the noisy observed returns. This distributional property provides information on parameter uncertainty in addition to value estimates. We provide theoretical results for our approach and analyze the performance of our proposed optimizer on domains with large state and action spaces.
Trust region for policies: The natural gradient method, when applied to RL tasks, is mostly used in policy gradient algorithms to estimate the parameters of the policy @cite_20 @cite_21 @cite_38 . Trust region methods in RL have been developed for parameterized policies @cite_38 @cite_31 . Despite that, trust region methods for parameterized VFs are rarely presented in the RL literature. Recently, it has been suggested to apply the natural gradient method also to the critic in the actor-critic framework, using Kronecker-factored approximations, and to apply the Gauss-Newton method to estimate the VF. However, these works did not analyze and formalize the underlying model and assumptions that lead to the regularization in the objective function, while this is the focus of our work.
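As a generic illustration of the kind of regularization meant here (our own sketch of the standard EKF/MAP form, not a formula taken from the cited works), a trust-region-like objective for value parameters $\theta$ can be written as

$$
\theta_t \;=\; \arg\min_{\theta}\ \big(y_t - h(\theta)\big)^{\top} R^{-1} \big(y_t - h(\theta)\big) \;+\; \big(\theta - \theta_{t-1}\big)^{\top} P_{t\mid t-1}^{-1} \big(\theta - \theta_{t-1}\big),
$$

where the second term keeps $\theta$ close to the previous estimate, weighted by the current parameter uncertainty $P_{t\mid t-1}$; linearizing $h$ and solving in closed form recovers the EKF update.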
{ "cite_N": [ "@cite_38", "@cite_31", "@cite_21", "@cite_20" ], "mid": [ "2749928749", "2963313316", "1848094219", "2949608212" ], "abstract": [ "In this work, we propose to apply trust region optimization to deep reinforcement learning using a recently proposed Kronecker-factored approximation to the curvature. We extend the framework of natural policy gradient and propose to optimize both the actor and the critic using Kronecker-factored approximate curvature (K-FAC) with trust region; hence we call our method Actor Critic using Kronecker-Factored Trust Region (ACKTR). To the best of our knowledge, this is the first scalable trust region natural gradient method for actor-critic methods. It is also a method that learns non-trivial tasks in continuous control as well as discrete control policies directly from raw pixel inputs. We tested our approach across discrete domains in Atari games as well as continuous domains in the MuJoCo environment. With the proposed methods, we are able to achieve higher rewards and a 2- to 3-fold improvement in sample efficiency on average, compared to previous state-of-the-art on-policy actor-critic methods. Code is available at this https URL", "In this work, we propose to apply trust region optimization to deep reinforcement learning using a recently proposed Kronecker-factored approximation to the curvature. We extend the framework of natural policy gradient and propose to optimize both the actor and the critic using Kronecker-factored approximate curvature (K-FAC) with trust region; hence we call our method Actor Critic using Kronecker-Factored Trust Region (ACKTR). To the best of our knowledge, this is the first scalable trust region natural gradient method for actor-critic methods. It is also the method that learns non-trivial tasks in continuous control as well as discrete control policies directly from raw pixel inputs. We tested our approach across discrete domains in Atari games as well as continuous domains in the MuJoCo environment. With the proposed methods, we are able to achieve higher rewards and a 2- to 3-fold improvement in sample efficiency on average, compared to previous state-of-the-art on-policy actor-critic methods. Code is available at https: github.com openai baselines", "The success of applying policy gradient reinforcement learning (RL) to difficult control tasks hinges crucially on the ability to determine a sensible initialization for the policy. Transfer learning methods tackle this problem by reusing knowledge gleaned from solving other related tasks. In the case of multiple task domains, these algorithms require an inter-task mapping to facilitate knowledge transfer across domains. However, there are currently no general methods to learn an inter-task mapping without requiring either background knowledge that is not typically present in RL settings, or an expensive analysis of an exponential number of inter-task mappings in the size of the state and action spaces. This paper introduces an autonomous framework that uses unsupervised manifold alignment to learn intertask mappings and effectively transfer samples between different task domains. Empirical results on diverse dynamical systems, including an application to quadrotor control, demonstrate its effectiveness for cross-domain transfer in the context of policy gradient RL.", "We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. 
By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters." ] }
1901.07860
2914377850
Policy evaluation is a key process in reinforcement learning. It assesses a given policy using an estimate of the corresponding value function. When using a parameterized function to approximate the value, it is common to optimize the set of parameters by minimizing the sum of squared Bellman Temporal Difference errors. However, this approach ignores certain distributional properties of both the errors and the value parameters. Taking these distributions into account in the optimization process can provide useful information on the amount of confidence in value estimation. In this work we propose to optimize the value by minimizing a regularized objective function which forms a trust region over its parameters. We present a novel optimization method, the Kalman Optimization for Value Approximation (KOVA), based on the Extended Kalman Filter. KOVA minimizes the regularized objective function by adopting a Bayesian perspective over both the value parameters and the noisy observed returns. This distributional property provides information on parameter uncertainty in addition to value estimates. We provide theoretical results for our approach and analyze the performance of our proposed optimizer on domains with large state and action spaces.
Distributional perspective on values and observations: Distributional RL @cite_6 treats the full (general) distribution of the total return and considers the VF parameters as deterministic. In our work we assume a Gaussian distribution over the total return and, in addition, a Gaussian distribution over the VF parameters.
{ "cite_N": [ "@cite_6" ], "mid": [ "2132849848" ], "abstract": [ "We present a novel Bayesian approach to the problem of value function estimation in continuous state spaces. We define a probabilistic generative model for the value function by imposing a Gaussian prior over value functions and assuming a Gaussian noise model. Due to the Gaussian nature of the random processes involved, the posterior distribution of the value function is also Gaussian and is therefore described entirely by its mean and covariance. We derive exact expressions for the posterior process moments, and utilizing an efficient sequential sparsification method, we describe an on-line algorithm for learning them. We demonstrate the operation of the algorithm on a 2-dimensional continuous spatial navigation domain." ] }
1901.07860
2914377850
Policy evaluation is a key process in reinforcement learning. It assesses a given policy using an estimate of the corresponding value function. When using a parameterized function to approximate the value, it is common to optimize the set of parameters by minimizing the sum of squared Bellman Temporal Difference errors. However, this approach ignores certain distributional properties of both the errors and the value parameters. Taking these distributions into account in the optimization process can provide useful information on the amount of confidence in value estimation. In this work we propose to optimize the value by minimizing a regularized objective function which forms a trust region over its parameters. We present a novel optimization method, the Kalman Optimization for Value Approximation (KOVA), based on the Extended Kalman Filter. KOVA minimizes the regularized objective function by adopting a Bayesian perspective over both the value parameters and the noisy observed returns. This distributional property provides information on parameter uncertainty in addition to value estimates. We provide theoretical results for our approach and analyze the performance of our proposed optimizer on domains with large state and action spaces.
Our work may be seen as a modern extension of GPTD @cite_37 @cite_5 for DRL domains with continuous state and action spaces. GPTD uses Gaussian Processes (GPs) for both the VF and the total return in order to solve the RL problem of value estimation. We introduce several improvements and generalizations over their work: (1) our formulation is adapted to learning nonlinear VF approximations, as is common in DRL; (2) we include a fading-memory option for previous observations by using a decay factor in the error covariance prediction ( @math ); (3) we allow for a general observation noise covariance (not necessarily diagonal) and for general noisy observations (not only 1-step TD errors); (4) our observation vector @math has a fixed size @math (the batch size), as opposed to the vectors in GPTD, which grow with every new observation and make training difficult in DRL domains.
{ "cite_N": [ "@cite_5", "@cite_37" ], "mid": [ "2149273154", "2134540127" ], "abstract": [ "Gaussian process (GP) models are very popular for machine learning and regression and they are widely used to account for spatial or temporal relationships between multivariate random variables. In this paper, we propose a general formulation of underdetermined source separation as a problem involving GP regression. The advantage of the proposed unified view is first to describe the different underdetermined source separation problems as particular cases of a more general framework. Second, it provides a flexible means to include a variety of prior information concerning the sources such as smoothness, local stationarity or periodicity through the use of adequate covariance functions. Third, given the model, it provides an optimal solution in the minimum mean squared error (MMSE) sense to the source separation problem. In order to make the GP models tractable for very large signals, we introduce framing as a GP approximation and we show that computations for regularly sampled and locally stationary GPs can be done very efficiently in the frequency domain. These findings establish a deep connection between GP and nonnegative tensor factorizations (NTF) with the Itakura-Saito distance and lead to effective methods to learn GP hyperparameters for very large and regularly sampled signals.", "We present an implementation of model-based online reinforcement learning (RL) for continuous domains with deterministic transitions that is specifically designed to achieve low sample complexity. To achieve low sample complexity, since the environment is unknown, an agent must intelligently balance exploration and exploitation, and must be able to rapidly generalize from observations. While in the past a number of related sample efficient RL algorithms have been proposed, to allow theoretical analysis, mainly model-learners with weak generalization capabilities were considered. Here, we separate function approximation in the model learner (which does require samples) from the interpolation in the planner (which does not require samples). For model-learning we apply Gaussian processes regression (GP) which is able to automatically adjust itself to the complexity of the problem (via Bayesian hyperparameter selection) and, in practice, often able to learn a highly accurate model from very little data. In addition, a GP provides a natural way to determine the uncertainty of its predictions, which allows us to implement the \"optimism in the face of uncertainty\" principle used to efficiently control exploration. Our method is evaluated on four common benchmark domains." ] }
1901.08113
2913856657
Network modeling is a critical component for building self-driving Software-Defined Networks, particularly to find optimal routing schemes that meet the goals set by administrators. However, existing modeling techniques do not meet the requirements to provide accurate estimations of relevant performance metrics such as delay and jitter. In this paper we propose a novel Graph Neural Network (GNN) model able to understand the complex relationship between topology, routing and input traffic to produce accurate estimates of the per-source destination pair mean delay and jitter. GNN are tailored to learn and model information structured as graphs and as a result, our model is able to generalize over arbitrary topologies, routing schemes and variable traffic intensity. In the paper we show that our model provides accurate estimates of delay and jitter (worst case R2 = 0.86) when testing against topologies, routing and traffic not seen during training. In addition, we present the potential of the model for network operation by presenting several use-cases that show its effective use in per-source destination pair delay jitter routing optimization and its generalization capabilities by reasoning in topologies and routing schemes not seen during training.
Finally, an early attempt to use Graph Neural Networks for computer networks can be found in @cite_22 . In this case the authors use a GNN to learn shortest-path routing and max-min routing using supervised learning. While this approach is able to generalize to different topologies, it cannot generalize to routing schemes beyond the ones for which it has been specifically trained. In addition, the focus of that paper is not to estimate the performance of such routing schemes.
{ "cite_N": [ "@cite_22" ], "mid": [ "2116341502" ], "abstract": [ "Many underlying relationships among data in several areas of science and engineering, e.g., computer vision, molecular chemistry, molecular biology, pattern recognition, and data mining, can be represented in terms of graphs. In this paper, we propose a new neural network model, called graph neural network (GNN) model, that extends existing neural network methods for processing the data represented in graph domains. This GNN model, which can directly process most of the practically useful types of graphs, e.g., acyclic, cyclic, directed, and undirected, implements a function tau(G,n) isin IRm that maps a graph G and one of its nodes n into an m-dimensional Euclidean space. A supervised learning algorithm is derived to estimate the parameters of the proposed GNN model. The computational cost of the proposed algorithm is also considered. Some experimental results are shown to validate the proposed learning algorithm, and to demonstrate its generalization capabilities." ] }
1901.07807
2905445635
Despite technological advances, most smart objects in the Internet of Things (IoT) cannot be accessed using technologies designed and developed for interacting with powerful Internet servers. IoT use cases involve devices that not only have limited resources, but also they are not always connected to the Internet and are physically exposed to tampering. In this paper, we describe the design, development, and evaluation of a smart contract-based solution that allows end-users to securely interact with smart devices. Our approach enables access control, Thing authentication, and payments in a fully decentralized setting, taking at the same time into consideration the limitations and constraints imposed by both blockchain technologies and the IoT paradigm. Our prototype implementation is based on existing technologies, i.e., Ethereum smart contracts, which makes it realistic and fundamentally secure.
Prior work on blockchain-assisted access control has proposed schemes that store access control policies in the blockchain. For example, @cite_4 use the Bitcoin blockchain to store "Right Transfer Transactions", i.e., transactions indicating that a user is allowed to access a particular resource. These transactions are then used by "Policy Enforcement Points". @cite_3 use the Bitcoin blockchain to store access control policies to protect personal data. Similarly, @cite_6 store access control policies in the Bitcoin blockchain for controlling access to data produced by IoT devices. However, storing such sensitive information in the blockchain clearly constitutes a privacy and security threat. Even if we ignore the fact that the blockchain should not be used for "storing secrets", the immutability of the blockchain may allow @math parties to deduce information about the access patterns of a particular user, or even about the security policies of a content owner.
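As a toy illustration of the privacy concern raised above — hypothetical, not taken from any of the cited systems — the following sketch contrasts publishing a readable policy on a ledger with publishing only a salted hash commitment, which hides the policy's content while still allowing later off-chain verification.

import hashlib

# Naive scheme criticized above: the readable policy lives on the ledger,
# so any observer learns who may access what, forever (immutability).
policy = b"user:alice resource:sensor42 action:read"
public_ledger = [policy]

# Commitment-based alternative: only H(policy || salt) is published;
# the owner later reveals (policy, salt) to a verifier off-chain.
salt = b"random-nonce"                   # assumed fresh randomness
commitment = hashlib.sha256(policy + salt).hexdigest()
private_ledger = [commitment]

def verify(revealed_policy: bytes, revealed_salt: bytes) -> bool:
    digest = hashlib.sha256(revealed_policy + revealed_salt).hexdigest()
    return digest in private_ledger

print(verify(policy, salt))              # True: commitment checks out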
{ "cite_N": [ "@cite_4", "@cite_6", "@cite_3" ], "mid": [ "2620085947", "2803736556", "2810011639" ], "abstract": [ "Access Control systems are used in computer security to regulate the access to critical or valuable resources. The rights of subjects to access such resources are typically expressed through access control policies, which are evaluated at access request time against the current access context. This paper proposes a new approach based on blockchain technology to publish the policies expressing the right to access a resource and to allow the distributed transfer of such right among users. In our proposed protocol the policies and the rights exchanges are publicly visible on the blockchain, consequently any user can know at any time the policy paired with a resource and the subjects who currently have the rights to access the resource. This solution allows distributed auditability, preventing a party from fraudulently denying the rights granted by an enforceable policy. We also show a possible working implementation based on XACML policies, deployed on the Bitcoin blockchain.", "Since the introduction of Bitcoin in 2008, blockchain systems have seen an enormous increase in adoption. By providing a persistent, distributed, and append-only ledger, blockchains enable numerous applications such as distributed consensus, robustness against equivocation, and smart contracts. However, recent studies show that blockchain systems such as Bitcoin can be (mis) used to store arbitrary content. This has already been used to store arguably objectionable content on Bitcoin's blockchain. Already single instances of clearly objectionable or even illegal content can put the whole system at risk by making its node operators culpable. To overcome this imminent risk, we survey and discuss the design space of countermeasures against the insertion of such objectionable content. Our analysis shows a wide spectrum of potential countermeasures, which are often combinable for increased efficiency. First, we investigate special-purpose content detectors as an ad hoc mitigation. As they turn out to be easily evadable, we also investigate content-agnostic countermeasures. We find that mandatory minimum fees as well as mitigation of transaction manipulability via identifier commitments significantly raise the bar for inserting harmful content into a blockchain.", "In traditional cloud storage systems, attribute-based encryption (ABE) is regarded as an important technology for solving the problem of data privacy and fine-grained access control. However, in all ABE schemes, the private key generator has the ability to decrypt all data stored in the cloud server, which may bring serious problems such as key abuse and privacy data leakage. Meanwhile, the traditional cloud storage model runs in a centralized storage manner, so single point of failure may leads to the collapse of system. With the development of blockchain technology, decentralized storage mode has entered the public view. The decentralized storage approach can solve the problem of single point of failure in traditional cloud storage systems and enjoy a number of advantages over centralized storage, such as low price and high throughput. In this paper, we study the data storage and sharing scheme for decentralized storage systems and propose a framework that combines the decentralized storage system interplanetary file system, the Ethereum blockchain, and ABE technology. 
In this framework, the data owner has the ability to distribute secret key for data users and encrypt shared data by specifying access policy, and the scheme achieves fine-grained access control over data. At the same time, based on smart contract on the Ethereum blockchain, the keyword search function on the cipher text of the decentralized storage systems is implemented, which solves the problem that the cloud server may not return all of the results searched or return wrong results in the traditional cloud storage systems. Finally, we simulated the scheme in the Linux system and the Ethereum official test network Rinkeby, and the experimental results show that our scheme is feasible." ] }
1901.07807
2905445635
Despite technological advances, most smart objects in the Internet of Things (IoT) cannot be accessed using technologies designed and developed for interacting with powerful Internet servers. IoT use cases involve devices that not only have limited resources, but also they are not always connected to the Internet and are physically exposed to tampering. In this paper, we describe the design, development, and evaluation of a smart contract-based solution that allows end-users to securely interact with smart devices. Our approach enables access control, Thing authentication, and payments in a fully decentralized setting, taking at the same time into consideration the limitations and constraints imposed by both blockchain technologies and the IoT paradigm. Our prototype implementation is based on existing technologies, i.e., Ethereum smart contracts, which makes it realistic and fundamentally secure.
A growing body of work proposes the use of custom blockchains in order to overcome similar challenges. For example, @cite_0 implement a custom-made blockchain for a smart home application and consider per-home miners, which also act as trusted proxies for the home devices. Similarly, @cite_5 propose a blockchain solution that can be used for providing access control in IoT applications. Such approaches, however, provided they are secure, require a critical mass of users to adopt the proposed technology.
{ "cite_N": [ "@cite_0", "@cite_5" ], "mid": [ "2606467208", "2624953603" ], "abstract": [ "There has been increasing interest in adopting BlockChain (BC), that underpins the crypto-currency Bitcoin, in Internet of Things (IoT) for security and privacy. However, BCs are computationally expensive and involve high bandwidth overhead and delays, which are not suitable for most IoT devices. This paper proposes a lightweight BC-based architecture for IoT that virtually eliminates the overheads of classic BC, while maintaining most of its security and privacy benefits. IoT devices benefit from a private immutable ledger, that acts similar to BC but is managed centrally, to optimize energy consumption. High resource devices create an overlay network to implement a publicly accessible distributed BC that ensures end-to-end security and privacy. The proposed architecture uses distributed trust to reduce the block validation processing time. We explore our approach in a smart home setting as a representative case study for broader IoT applications. Qualitative evaluation of the architecture under common threat models highlights its effectiveness in providing security and privacy for IoT applications. Simulations demonstrate that our method decreases packet and processing overhead significantly compared to the BC implementation used in Bitcoin.", "In the Internet of Things (IoT) scenario, the block-chain and, in general, Peer-to-Peer approaches could play an important role in the development of decentralized and dataintensive applications running on billion of devices, preserving the privacy of the users. Our research goal is to understand whether the blockchain and Peer-to-Peer approaches can be employed to foster a decentralized and private-by-design IoT. As a first step in our research process, we conducted a Systematic Literature Review on the blockchain to gather knowledge on the current uses of this technology and to document its current degree of integrity, anonymity and adaptability. We found 18 use cases of blockchain in the literature. Four of these use cases are explicitly designed for IoT. We also found some use cases that are designed for a private-by-design data management. We also found several issues in the integrity, anonymity and adaptability. Regarding anonymity, we found that in the blockchain only pseudonymity is guaranteed. Regarding adaptability and integrity, we discovered that the integrity of the blockchain largely depends on the high difficulty of the Proof-of-Work and on the large number of honest miners, but at the same time a difficult Proof-of-Work limits the adaptability. We documented and categorized the current uses of the blockchain, and provided a few recommendations for future work to address the above-mentioned issues." ] }
1901.07827
2912648642
The success of convolutional neural networks (CNNs) in computer vision applications has been accompanied by a significant increase of computation and memory costs, which prohibits its usage on resource-limited environments such as mobile or embedded devices. To this end, the research of CNN compression has recently become emerging. In this paper, we propose a novel filter pruning scheme, termed structured sparsity regularization (SSR), to simultaneously speedup the computation and reduce the memory overhead of CNNs, which can be well supported by various off-the-shelf deep learning libraries. Concretely, the proposed scheme incorporates two different regularizers of structured sparsity into the original objective function of filter pruning, which fully coordinates the global outputs and local pruning operations to adaptively prune filters. We further propose an Alternative Updating with Lagrange Multipliers (AULM) scheme to efficiently solve its optimization. AULM follows the principle of ADMM and alternates between promoting the structured sparsity of CNNs and optimizing the recognition loss, which leads to a very efficient solver (2.5x to the most recent work that directly solves the group sparsity-based regularization). Moreover, by imposing the structured sparsity, the online inference is extremely memory-light, since the number of filters and the output feature maps are simultaneously reduced. The proposed scheme has been deployed to a variety of state-of-the-art CNN structures including LeNet, AlexNet, VGG, ResNet and GoogLeNet over different datasets. Quantitative results demonstrate that the proposed scheme achieves superior performance over the state-of-the-art methods. We further demonstrate the proposed compression scheme for the task of transfer learning, including domain adaptation and object detection, which also show exciting performance gains over the state-of-the-arts.
Early works on network compression mainly focus on compressing the fully-connected layers @cite_44 @cite_10 @cite_17 @cite_68 @cite_12 . For instance, LeCun @cite_44 and Hassibi @cite_10 proposed saliency measurements based on the Hessian matrix of the loss function with respect to the parameters, and pruned the network parameters with low saliency values. Srinivas and Babu @cite_17 exploited the redundancy among neurons to remove a subset of neurons without retraining. Han @cite_68 @cite_12 proposed a pruning scheme that removes low-weight connections to reduce the total number of parameters in CNNs. However, these methods only reduce the memory footprint and do not guarantee lower computation time, since the runtime is mostly dominated by the convolutional layers. Moreover, the above pruning schemes typically produce non-structured sparse CNNs, which lack the flexibility to be deployed across different platforms or libraries. For example, the Compressed Sparse Column (CSC) based weight format requires changing the original weight storage of Caffe @cite_18 after pruning, and is not well supported across different platforms.
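A minimal sketch of the magnitude-based ("low-weight connection") pruning criterion discussed above, in plain NumPy with an illustrative threshold; it also makes visible why such unstructured sparsity does not, by itself, reduce computation for dense libraries.

import numpy as np

# Magnitude pruning sketch: zero out the smallest-magnitude weights.
# The 70% pruning ratio and matrix size are illustrative assumptions.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(6, 6))       # weight matrix of some layer

threshold = np.percentile(np.abs(W), 70)     # prune ~70% smallest weights
mask = np.abs(W) > threshold
W_pruned = W * mask

print(f"sparsity: {1 - mask.mean():.2f}")    # fraction of zeroed weights
# The shape is still (6, 6): memory layout and FLOPs are unchanged for
# dense BLAS kernels, which is exactly the drawback the paragraph notes;
# exploiting the zeros requires a sparse format (e.g. CSC) plus kernels
# that support it.
print(W_pruned.shape)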
{ "cite_N": [ "@cite_18", "@cite_44", "@cite_68", "@cite_10", "@cite_12", "@cite_17" ], "mid": [ "2788715907", "2515385951", "2962965870", "2619444510", "2891561769", "2964019666" ], "abstract": [ "In recent years considerable research efforts have been devoted to compression techniques of convolutional neural networks (CNNs). Many works so far have focused on CNN connection pruning methods which produce sparse parameter tensors in convolutional or fully-connected layers. It has been demonstrated in several studies that even simple methods can effectively eliminate connections of a CNN. However, since these methods make parameter tensors just sparser but no smaller, the compression may not transfer directly to acceleration without support from specially designed hardware. In this paper, we propose an iterative approach named Auto-balanced Filter Pruning, where we pre-train the network in an innovative auto-balanced way to transfer the representational capacity of its convolutional layers to a fraction of the filters, prune the redundant ones, then re-train it to restore the accuracy. In this way, a smaller version of the original network is learned and the floating-point operations (FLOPs) are reduced. By applying this method on several common CNNs, we show that a large portion of the filters can be discarded without obvious accuracy drop, leading to significant reduction of computational burdens. Concretely, we reduce the inference cost of LeNet-5 on MNIST, VGG-16 and ResNet-56 on CIFAR-10 by 95.1 , 79.7 and 60.9 , respectively. Copyright © 2018, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.", "The success of CNNs in various applications is accompanied by a significant increase in the computation and parameter storage costs. Recent efforts toward reducing these overheads involve pruning and compressing the weights of various layers without hurting original accuracy. However, magnitude-based pruning of weights reduces a significant number of parameters from the fully connected layers and may not adequately reduce the computation costs in the convolutional layers due to irregular sparsity in the pruned networks. We present an acceleration method for CNNs, where we prune filters from CNNs that are identified as having a small effect on the output accuracy. By removing whole filters in the network together with their connecting feature maps, the computation costs are reduced significantly. In contrast to pruning weights, this approach does not result in sparse connectivity patterns. Hence, it does not need the support of sparse convolution libraries and can work with existing efficient BLAS libraries for dense matrix multiplications. We show that even simple filter pruning techniques can reduce inference costs for VGG-16 by up to 34 and ResNet-110 by up to 38 on CIFAR10 while regaining close to the original accuracy by retraining the networks.", "The success of CNNs in various applications is accompanied by a significant increase in the computation and parameter storage costs. Recent efforts toward reducing these overheads involve pruning and compressing the weights of various layers without hurting original accuracy. However, magnitude-based pruning of weights reduces a significant number of parameters from the fully connected layers and may not adequately reduce the computation costs in the convolutional layers due to irregular sparsity in the pruned networks. 
We present an acceleration method for CNNs, where we prune filters from CNNs that are identified as having a small effect on the output accuracy. By removing whole filters in the network together with their connecting feature maps, the computation costs are reduced significantly. In contrast to pruning weights, this approach does not result in sparse connectivity patterns. Hence, it does not need the support of sparse convolution libraries and can work with existing efficient BLAS libraries for dense matrix multiplications. We show that even simple filter pruning techniques can reduce inference costs for VGG-16 by up to 34 and ResNet-110 by up to 38 on CIFAR10 while regaining close to the original accuracy by retraining the networks.", "Convolutional neural networks (CNNs) have state-of-the-art performance on many problems in machine vision. However, networks with superior performance often have millions of weights so that it is difficult or impossible to use CNNs on computationally limited devices or to humanly interpret them. A myriad of CNN compression approaches have been proposed and they involve pruning and compressing the weights and filters. In this article, we introduce a greedy structural compression scheme that prunes filters in a trained CNN. We define a filter importance index equal to the classification accuracy reduction (CAR) of the network after pruning that filter (similarly defined as RAR for regression). We then iteratively prune filters based on the CAR index. This algorithm achieves substantially higher classification accuracy in AlexNet compared to other structural compression schemes that prune filters. Pruning half of the filters in the first or second layer of AlexNet, our CAR algorithm achieves 26 and 20 higher classification accuracies respectively, compared to the best benchmark filter pruning scheme. Our CAR algorithm, combined with further weight pruning and compressing, reduces the size of first or second convolutional layer in AlexNet by a factor of 42, while achieving close to original classification accuracy through retraining (or fine-tuning) network. Finally, we demonstrate the interpretability of CAR-compressed CNNs by showing that our algorithm prunes filters with visually redundant functionalities. In fact, out of top 20 CAR-pruned filters in AlexNet, 17 of them in the first layer and 14 of them in the second layer are color-selective filters as opposed to shape-selective filters. To our knowledge, this is the first reported result on the connection between compression and interpretability of CNNs.", "Deep Convolutional Neural Networks (CNNs) offer remarkable performance of classifications and regressions in many high-dimensional problems and have been widely utilized in real-word cognitive applications. However, high computational cost of CNNs greatly hinder their deployment in resource-constrained applications, real-time systems and edge computing platforms. To overcome this challenge, we propose a novel filter-pruning framework, two-phase filter pruning based on conditional entropy, namely , to compress the CNN models and reduce the inference time with marginal performance degradation. In our proposed method, we formulate filter pruning process as an optimization problem and propose a novel filter selection criteria measured by conditional entropy. Based on the assumption that the representation of neurons shall be evenly distributed, we also develop a maximum-entropy filter freeze technique that can reduce over fitting. 
Two filter pruning strategies -- global and layer-wise strategies, are compared. Our experiment result shows that combining these two strategies can achieve a higher neural network compression ratio than applying only one of them under the same accuracy drop threshold. Two-phase pruning, that is, combining both global and layer-wise strategies, achieves 10 X FLOPs reduction and 46 inference time reduction on VGG-16, with 2 accuracy drop.", "Recently there has been a lot of work on pruning filters from deep convolutional neural networks (CNNs) with the intention of reducing computations. The key idea is to rank the filters based on a certain criterion (say, l1-norm, average percentage of zeros, etc) and retain only the top ranked filters. Once the low scoring filters are pruned away the remainder of the network is fine tuned and is shown to give performance comparable to the original unpruned network. In this work, we report experiments which suggest that the comparable performance of the pruned network is not due to the specific criterion chosen but due to the inherent plasticity of deep neural networks which allows them to recover from the loss of pruned filters once the rest of the filters are fine-tuned. Specifically, we show counter-intuitive results wherein by randomly pruning 25-50 filters from deep CNNs we are able to obtain the same performance as obtained by using state of the art pruning methods. We empirically validate our claims by doing an exhaustive evaluation with VGG-16 and ResNet-50. Further, we also evaluate a real world scenario where a CNN trained on all 1000 ImageNet classes needs to be tested on only a small set of classes at test time (say, only animals). We create a new benchmark dataset from ImageNet to evaluate such class specific pruning and show that even here a random pruning strategy gives close to state of the art performance. Lastly, unlike existing approaches which mainly focus on the task of image classification, in this work we also report results on object detection. We show that using a simple random pruning strategy we can achieve significant speed up in object detection (74 improvement in fps) while retaining the same accuracy as that of the original Faster RCNN model." ] }
1901.07925
2953303055
With the rapid development of spaceborne imaging techniques, object detection in optical remote sensing imagery has drawn much attention in recent decades. While many advanced works have been developed with powerful learning algorithms, the incomplete feature representation still cannot meet the demand for effectively and efficiently handling image deformations, particularly objective scaling and rotation. To this end, we propose a novel object detection framework, called Optical Remote Sensing Imagery detector (ORSIm detector), integrating diverse channel features extraction, feature learning, fast image pyramid matching, and boosting strategy. An ORSIm detector adopts a novel spatial-frequency channel feature (SFCF) by jointly considering the rotation-invariant channel features constructed in the frequency domain and the original spatial channel features (e.g., color channel and gradient magnitude). Subsequently, we refine SFCF using learning-based strategy in order to obtain the high-level or semantically meaningful features. In the test phase, we achieve a fast and coarsely scaled channel computation by mathematically estimating a scaling factor in the image domain. Extensive experimental results conducted on the two different airborne data sets are performed to demonstrate the superiority and effectiveness in comparison with the previous state-of-the-art methods.
Channel features refer to a collection of spatially discriminative features obtained by linear or non-linear transformations of the input image. Over the past decades, channel feature extraction techniques have received increasing interest, with successful applications in pedestrian detection @cite_0 @cite_52 and face detection @cite_39 @cite_4 @cite_31 . Owing to their high representation ability, a variety of channel features have been widely used in geospatial object detection. Tuermer @cite_15 utilized the Histogram of Oriented Gradients (HOG) @cite_24 as orientation channel features for airborne vehicle detection in dense urban scenes. Unfortunately, using orientation features alone tends to limit further improvement of the detection performance. Inspired by aggregate channel features (ACF) @cite_0 , Zhao @cite_57 extended the channel features by additionally considering color channels (e.g., gray-scale, RGB, HSV and LUV) to detect aircraft in remote sensing images. However, these methods usually fail to achieve desirable performance due to their sensitivity to object rotation. Although many tentative works have been proposed to model the object's rotation behavior @cite_21 @cite_5 , the performance gain is still limited by the discrete spatial coordinate system.
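The following toy sketch illustrates the general channel-features recipe described above, roughly in the spirit of ACF: build per-pixel color, gradient-magnitude and orientation channels, then sum-pool them into a feature vector. All parameters (grayscale input instead of LUV, bin and block counts) are illustrative assumptions, not the settings of the cited detectors.

import numpy as np

# Toy channel-feature extraction for one 32x32 grayscale window.
rng = np.random.default_rng(0)
img = rng.random((32, 32))

gy, gx = np.gradient(img)
mag = np.hypot(gx, gy)                        # gradient-magnitude channel
ori = np.arctan2(gy, gx) % np.pi              # orientation in [0, pi)

n_bins = 6
channels = [img, mag]                         # "color" + magnitude channels
for b in range(n_bins):                       # HOG-style orientation channels
    lo, hi = b * np.pi / n_bins, (b + 1) * np.pi / n_bins
    channels.append(mag * ((ori >= lo) & (ori < hi)))

def aggregate(ch, block=4):                   # sum-pool each channel over blocks
    h, w = ch.shape
    return ch[:h // block * block, :w // block * block] \
             .reshape(h // block, block, w // block, block).sum(axis=(1, 3))

feature = np.concatenate([aggregate(c).ravel() for c in channels])
print(feature.shape)                          # one feature vector per window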
{ "cite_N": [ "@cite_4", "@cite_15", "@cite_21", "@cite_52", "@cite_39", "@cite_0", "@cite_24", "@cite_57", "@cite_5", "@cite_31" ], "mid": [ "2159386181", "2609658199", "2416791088", "2066916495", "2509749674", "2773003563", "2892998444", "2291533986", "2792161633", "2605117450" ], "abstract": [ "We study the performance of ‘integral channel features’ for image classification tasks, focusing in particular on pedestrian detection. The general idea behind integral channel features is that multiple registered image channels are computed using linear and non-linear transformations of the input image, and then features such as local sums, histograms, and Haar features and their various generalizations are efficiently computed using integral images. Such features have been used in recent literature for a variety of tasks – indeed, variations appear to have been invented independently multiple times. Although integral channel features have proven effective, little effort has been devoted to analyzing or optimizing the features themselves. In this work we present a unified view of the relevant work in this area and perform a detailed experimental evaluation. We demonstrate that when designed properly, integral channel features not only outperform other features including histogram of oriented gradient (HOG), they also (1) naturally integrate heterogeneous sources of information, (2) have few parameters and are insensitive to exact parameter settings, (3) allow for more accurate spatial localization during detection, and (4) result in fast detectors when coupled with cascade classifiers.", "Suitable feature representation is essential for performing video analysis and understanding in applications within the smart surveillance domain. In this paper, we propose a novel spatiotemporal feature descriptor based on co-occurrence matrices computed from the optical flow magnitude and orientation. Our method, called Optical Flow Co-occurrence Matrices (OFCM), extracts a robust set of measures known as Haralick features to describe the flow patterns by measuring meaningful properties such as contrast, entropy and homogeneity of co-occurrence matrices to capture local space-time characteristics of the motion through the neighboring optical flow magnitude and orientation. We evaluate the proposed method on the action recognition problem by applying a visual recognition pipeline involving bag of local spatiotemporal features and SVM classification. The experimental results, carried on three well-known datasets (KTH, UCF Sports and HMDB51), demonstrate that OFCM outperforms the results achieved by several widely employed spatiotemporal feature descriptors such as HOF, HOG3D and MBH, indicating its suitability to be used as video representation.", "In this paper, we explore various aspects of fusing LIDAR and color imagery for pedestrian detection in the context of convolutional neural networks (CNNs), which have recently become state-of-art for many vision problems. We incorporate LIDAR by up-sampling the point cloud to a dense depth map and then extracting three features representing different aspects of the 3D scene. We then use those features as extra image channels. Specifically, we leverage recent work on HHA [9] (horizontal disparity, height above ground, and angle) representations, adapting the code to work on up-sampled LIDAR rather than Microsoft Kinect depth maps. We show, for the first time, that such a representation is applicable to up-sampled LIDAR data, despite its sparsity. 
Since CNNs learn a deep hierarchy of feature representations, we then explore the question: At what level of representation should we fuse this additional information with the original RGB image channels? We use the KITTI pedestrian detection dataset for our exploration. We first replicate the finding that region-CNNs (R-CNNs) [8] can outperform the original proposal mechanism using only RGB images, but only if fine-tuning is employed. Then, we show that: 1) using HHA features and RGB images performs better than RGB-only, even without any fine-tuning using large RGB web data, 2) fusing RGB and HHA achieves the strongest results if done late, but, under a parameter or computational budget, is best done at the early to middle layers of the hierarchical representation, which tend to represent midlevel features rather than low (e.g. edges) or high (e.g. object class decision) level features, 3) some of the less successful methods have the most parameters, indicating that increased classification accuracy is not simply a function of increased capacity in the neural network.", "Detecting small objects such as vehicles in satellite images is a difficult problem. Many features (such as histogram of oriented gradient, local binary pattern, scale-invariant featuretransform, etc.) have been used to improve the performance of object detection, but mostly in simple environments such as those on roads. proposed that no satisfactory accuracy has been achieved in complex environments such as the City of San Francisco. Deep convolutional neural networks (DNNs) can learn rich features from the training data automatically and has achieved state-of-the-art performance in many image classification databases. Though the DNN has shown robustness to distortion, it only extracts features of the same scale, and hence is insufficient to tolerate large-scale variance of object. In this letter, we present a hybrid DNN (HDNN), by dividing the maps of the last convolutional layer and the max-pooling layer of DNN into multiple blocks of variable receptive field sizes or max-pooling field sizes, to enable the HDNN to extract variable-scale features. Comparative experimental results indicate that our proposed HDNN significantly outperforms the traditional DNN on vehicle detection.", "Object detection in very high resolution (VHR) optical remote sensing images is one of the most fundamental but challenging problems in the field of remote sensing image analysis. As object detection is usually carried out in feature space, effective feature representation is very important to construct a high-performance object detection system. During the last decades, a great deal of effort has been made to develop various feature representations for the detection of different types of objects. Among various features developed for visual object detection, the histogram of oriented gradients (HOG) feature is maybe one of the most popular features that has been successfully applied to computer vision community. However, although the HOG feature has achieved great success in nature scene images, it is problematic to directly use it for object detection in optical remote sensing images because it is difficult to effectively handle the problem of object rotation variations. To explore a possible solution to the problem, this paper proposes a novel method to learn rotation-invariant HOG (RIHOG) features for object detection in optical remote sensing images. 
This is achieved by learning a rotation-invariant transformation model via optimizing a new objective function, which constrains the training samples before and after rotation to share the similar features to achieve rotation-invariance. In the experiments, we evaluate the proposed method on a publicly available 10-class VHR geospatial object detection dataset and comprehensive comparisons with state-of-the-arts demonstrate the effectiveness the proposed method.", "Recognizing fine-grained categories (e.g., bird species) highly relies on discriminative part localization and part-based fine-grained feature learning. Existing approaches predominantly solve these challenges independently, while neglecting the fact that part localization (e.g., head of a bird) and fine-grained feature learning (e.g., head shape) are mutually correlated. In this paper, we propose a novel part learning approach by a multi-attention convolutional neural network (MA-CNN), where part generation and feature learning can reinforce each other. MA-CNN consists of convolution, channel grouping and part classification sub-networks. The channel grouping network takes as input feature channels from convolutional layers, and generates multiple parts by clustering, weighting and pooling from spatially-correlated channels. The part classification network further classifies an image by each individual part, through which more discriminative fine-grained features can be learned. Two losses are proposed to guide the multi-task learning of channel grouping and part classification, which encourages MA-CNN to generate more discriminative parts from feature channels and learn better fine-grained features from parts in a mutual reinforced way. MA-CNN does not need bounding box part annotation and can be trained end-to-end. We incorporate the learned parts from MA-CNN with part-CNN for recognition, and show the best performances on three challenging published fine-grained datasets, e.g., CUB-Birds, FGVC-Aircraft and Stanford-Cars.", "The performance of single image super-resolution has achieved significant improvement by utilizing deep convolutional neural networks (CNNs). The features in deep CNN contain different types of information which make different contributions to image reconstruction. However, most CNN-based models lack discriminative ability for different types of information and deal with them equally, which results in the representational capacity of the models being limited. On the other hand, as the depth of neural networks grows, the long-term information coming from preceding layers is easy to be weaken or lost in late layers, which is adverse to super-resolving image. To capture more informative features and maintain long-term information for image super-resolution, we propose a channel-wise and spatial feature modulation (CSFM) network in which a sequence of feature-modulation memory (FMM) modules is cascaded with a densely connected structure to transform low-resolution features to high informative features. In each FMM module, we construct a set of channel-wise and spatial attention residual (CSAR) blocks and stack them in a chain structure to dynamically modulate multi-level features in a global-and-local manner. This feature modulation strategy enables the high contribution information to be enhanced and the redundant information to be suppressed. 
Meanwhile, for long-term information persistence, a gated fusion (GF) node is attached at the end of the FMM module to adaptively fuse hierarchical features and distill more effective information via the dense skip connections and the gating mechanism. Extensive quantitative and qualitative evaluations on benchmark datasets illustrate the superiority of our proposed method over the state-of-the-art methods.", "Pedestrian detection based on the combination of convolutional neural network (CNN) and traditional handcrafted features (i.e., HOG+LUV) has achieved great success. In general, HOG+LUV are used to generate the candidate proposals and then CNN classifies these proposals. Despite its success, there is still room for improvement. For example, CNN classifies these proposals by the fully connected layer features, while proposal scores and the features in the inner-layers of CNN are ignored. In this paper, we propose a unifying framework called multi-layer channel features (MCF) to overcome the drawback. It first integrates HOG+LUV with each layer of CNN into a multi-layer image channels. Based on the multi-layer image channels, a multi-stage cascade AdaBoost is then learned. The weak classifiers in each stage of the multi-stage cascade are learned from the image channels of corresponding layer. Experiments on Caltech data set, INRIA data set, ETH data set, TUD-Brussels data set, and KITTI data set are conducted. With more abundant features, an MCF achieves the state of the art on Caltech pedestrian data set (i.e., 10.40 miss rate). Using new and accurate annotations, an MCF achieves 7.98 miss rate. As many non-pedestrian detection windows can be quickly rejected by the first few stages, it accelerates detection speed by 1.43 times. By eliminating the highly overlapped detection windows with lower scores after the first stage, it is 4.07 times faster than negligible performance loss.", "Abstract In this paper, a vehicle color recognition method using lightweight convolutional neural network (CNN) is proposed. Firstly, a lightweight CNN network architecture is specifically designed for the recognition task, which contains five layers, i.e. three convolutional layers, a global pooling layer and a fully connected layer. Different from the existing CNN based methods that only use the features output from the final layer for recognition, in this paper, the feature maps of intermediate convolutional layers are all applied for recognition based on the fact that these convolutional features can provide hierarchical representations of the images. Spatial Pyramid Matching (SPM) strategy is adopted to divide the feature map, and each SPM sub-region is encoded to generate a feature representation vector. These feature representation vectors of convolutional layers and the output feature vector of the global pooling layer are normalized and cascaded as a whole feature vector, which is finally utilized to train Support Vector Machine classifier to obtain the recognition model. The experimental results show that, compared with the state-of-art methods, the proposed method can obtain more than 0.7 higher recognition accuracy, up to 95.41 , while the dimensionality of the feature vector is only 18 and the memory footprint is only 0.5 .", "Fine-grained car recognition aims to recognize the category information of a car, such as car make, car model, or even the year of manufacture. 
A number of recent studies have shown that a deep convolutional neural network (DCNN) trained on a large-scale data set can achieve impressive results at a range of generic object classification tasks. In this paper, we propose a spatially weighted pooling (SWP) strategy, which considerably improves the robustness and effectiveness of the feature representation of most dominant DCNNs. More specifically, the SWP is a novel pooling layer, which contains a predefined number of spatially weighted masks or pooling channels. The SWP pools the extracted features of DCNNs with the guidance of its learnt masks, which measures the importance of the spatial units in terms of discriminative power. As the existing methods that apply uniform grid pooling on the convolutional feature maps of DCNNs, the proposed method can extract the convolutional features and generate the pooling channels from a single DCNN. Thus minimal modification is needed in terms of implementation. Moreover, the parameters of the SWP layer can be learned in the end-to-end training process of the DCNN. By applying our method to several fine-grained car recognition data sets, we demonstrate that the proposed method can achieve better performance than recent approaches in the literature. We advance the state-of-the-art results by improving the accuracy from 92.6 to 93.1 on the Stanford Cars-196 data set and 91.2 to 97.6 on the recent CompCars data set. We have also tested the proposed method on two additional large-scale data sets with impressive results observed." ] }
1901.07925
2953303055
With the rapid development of spaceborne imaging techniques, object detection in optical remote sensing imagery has drawn much attention in recent decades. While many advanced works have been developed with powerful learning algorithms, the incomplete feature representation still cannot meet the demand for effectively and efficiently handling image deformations, particularly objective scaling and rotation. To this end, we propose a novel object detection framework, called Optical Remote Sensing Imagery detector (ORSIm detector), integrating diverse channel features extraction, feature learning, fast image pyramid matching, and boosting strategy. An ORSIm detector adopts a novel spatial-frequency channel feature (SFCF) by jointly considering the rotation-invariant channel features constructed in the frequency domain and the original spatial channel features (e.g., color channel and gradient magnitude). Subsequently, we refine SFCF using learning-based strategy in order to obtain the high-level or semantically meaningful features. In the test phase, we achieve a fast and coarsely scaled channel computation by mathematically estimating a scaling factor in the image domain. Extensive experimental results conducted on the two different airborne data sets are performed to demonstrate the superiority and effectiveness in comparison with the previous state-of-the-art methods.
With a theoretical guarantee, Liu @cite_50 proposed a Fourier histogram of oriented gradients (FourierHOG) with a rigorous mathematical proof. It models the rotation-invariant descriptor in the continuous frequency domain rather than the discrete spatial domain, using a Fourier-based, convolutionally manipulated, tensor-valued transformation function @math . This function transfers tensor-valued vectorized features (e.g., HOG @cite_38 ) to a scalar-valued representation, making the features rotation-invariant while maximizing the information gain. In contrast with HOG-like approaches that compute the features (or descriptors) discretely in locally estimated coordinates obtained by pose normalization, FourierHOG uses a smooth continuous function to fit the statistical features in continuous coordinates, as illustrated in Fig. . Furthermore, such a strategy also avoids artifacts in the gradient binning and pose sampling of the HOG descriptor.
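The rotation-invariance argument behind this Fourier-domain view can be checked in a few lines: rotating a patch by alpha multiplies the m-th Fourier coefficient of its gradient-orientation distribution by a unit-modulus phase exp(-i*m*alpha), so the coefficient magnitudes are invariant. The snippet below only illustrates this principle on synthetic orientations; it is not the full FourierHOG descriptor of @cite_50, and all quantities are assumed for illustration.

import numpy as np

rng = np.random.default_rng(0)
thetas = rng.uniform(0, 2 * np.pi, size=500)  # gradient orientations in a patch
weights = rng.random(500)                     # gradient magnitudes

def fourier_coeffs(angles, w, orders=range(4)):
    # c_m = sum_j w_j * exp(-i * m * theta_j): Fourier expansion of the
    # magnitude-weighted orientation distribution
    return np.array([np.sum(w * np.exp(-1j * m * angles)) for m in orders])

c = fourier_coeffs(thetas, weights)
alpha = 0.7                                   # rotate the whole patch
c_rot = fourier_coeffs((thetas + alpha) % (2 * np.pi), weights)

# magnitudes match although the raw coefficients differ by a phase:
print(np.allclose(np.abs(c), np.abs(c_rot)))  # True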
{ "cite_N": [ "@cite_38", "@cite_50" ], "mid": [ "2008213480", "2949212545" ], "abstract": [ "The histogram of oriented gradients (HOG) is widely used for image description and proves to be very effective. In many vision problems, rotation-invariant analysis is necessary or preferred. Popular solutions are mainly based on pose normalization or learning, neglecting some intrinsic properties of rotations. This paper presents a method to build rotation-invariant HOG descriptors using Fourier analysis in polar spherical coordinates, which are closely related to the irreducible representation of the 2D 3D rotation groups. This is achieved by considering a gradient histogram as a continuous angular signal which can be well represented by the Fourier basis (2D) or spherical harmonics (3D). As rotation-invariance is established in an analytical way, we can avoid discretization artifacts and create a continuous mapping from the image to the feature space. In the experiments, we first show that our method outperforms the state-of-the-art in a public dataset for a car detection task in aerial images. We further use the Princeton Shape Benchmark and the SHREC 2009 Generic Shape Benchmark to demonstrate the high performance of our method for similarity measures of 3D shapes. Finally, we show an application on microscopic volumetric data.", "The Histogram of Oriented Gradient (HOG) descriptor has led to many advances in computer vision over the last decade and is still part of many state of the art approaches. We realize that the associated feature computation is piecewise differentiable and therefore many pipelines which build on HOG can be made differentiable. This lends to advanced introspection as well as opportunities for end-to-end optimization. We present our implementation of @math HOG based on the auto-differentiation toolbox Chumpy and show applications to pre-image visualization and pose estimation which extends the existing differentiable renderer OpenDR pipeline. Both applications improve on the respective state-of-the-art HOG approaches." ] }
1901.07925
2953303055
With the rapid development of spaceborne imaging techniques, object detection in optical remote sensing imagery has drawn much attention in recent decades. While many advanced works have been developed with powerful learning algorithms, the incomplete feature representation still cannot meet the demand for effectively and efficiently handling image deformations, particularly objective scaling and rotation. To this end, we propose a novel object detection framework, called Optical Remote Sensing Imagery detector (ORSIm detector), integrating diverse channel features extraction, feature learning, fast image pyramid matching, and boosting strategy. An ORSIm detector adopts a novel spatial-frequency channel feature (SFCF) by jointly considering the rotation-invariant channel features constructed in the frequency domain and the original spatial channel features (e.g., color channel and gradient magnitude). Subsequently, we refine SFCF using learning-based strategy in order to obtain the high-level or semantically meaningful features. In the test phase, we achieve a fast and coarsely scaled channel computation by mathematically estimating a scaling factor in the image domain. Extensive experimental results conducted on the two different airborne data sets are performed to demonstrate the superiority and effectiveness in comparison with the previous state-of-the-art methods.
In the field of machine learning, boosting methods have been widely used with great success for decades in various applications, e.g., object detection @cite_53 @cite_28 @cite_59 , face detection @cite_30 , and pose detection @cite_19 @cite_58 . Unlike other powerful classifiers (e.g., rotation-based SVM @cite_11 , structured SVM @cite_35 , rotation forest @cite_29 ), boosting-based classifiers iteratively select weak learners from a pool of candidates to deal with the hard examples of the previous round; the result can be treated as an enhanced model that integrates former results and greedily minimizes an exponential loss function. Each weak learner reweights the samples so that later weak learners focus more on examples misclassified by earlier ones. In this way, a strong classifier with higher generalization ability and parameter adaptiveness can be learned.
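A compact AdaBoost sketch matching this description, with decision stumps as weak learners (toy data and all hyperparameters are illustrative assumptions): each round fits a stump on reweighted samples, then raises the weights of misclassified examples so later learners concentrate on them.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)          # toy labels in {-1, +1}

w = np.full(len(y), 1 / len(y))                     # uniform sample weights
stumps, alphas = [], []

for _ in range(10):                                 # 10 boosting rounds
    best = None
    for f in range(X.shape[1]):                     # stump search: feature, threshold, sign
        for t in np.quantile(X[:, f], np.linspace(0.1, 0.9, 9)):
            for s in (1, -1):
                pred = s * np.where(X[:, f] > t, 1, -1)
                err = w @ (pred != y)               # weighted training error
                if best is None or err < best[0]:
                    best = (err, f, t, s)
    err, f, t, s = best
    alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))   # weak-learner weight
    pred = s * np.where(X[:, f] > t, 1, -1)
    w *= np.exp(-alpha * y * pred)                  # misclassified samples go up
    w /= w.sum()
    stumps.append((f, t, s)); alphas.append(alpha)

# strong classifier: sign of the weighted vote of the weak learners,
# which greedily minimizes the exponential loss mentioned above
F = sum(a * (s * np.where(X[:, f] > t, 1, -1))
        for a, (f, t, s) in zip(alphas, stumps))
print(f"training accuracy: {(np.sign(F) == y).mean():.2f}")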
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_28", "@cite_53", "@cite_29", "@cite_19", "@cite_59", "@cite_58", "@cite_11" ], "mid": [ "2011143997", "2118373237", "2024046085", "2494236530", "2737691244", "1601728199", "2100807570", "2103000819", "2773012335" ], "abstract": [ "Boosting-based object detection has received significant attention recently. In this paper, we propose totally corrective asymmetric boosting algorithms for real-time object detection. Our algorithms differ from Viola and Jones' detection framework in two ways. Firstly, our boosting algorithms explicitly optimize asymmetric loss of objectives, while AdaBoost used by Viola and Jones optimizes a symmetric loss. Secondly, by carefully deriving the Lagrange duals of the optimization problems, we design more efficient boosting in that the coefficients of the selected weak classifiers are updated in a totally corrective fashion, in contrast to the stagewise optimization commonly used by most boosting algorithms. Column generation is employed to solve the proposed optimization problems. Unlike conventional boosting, the proposed boosting algorithms are able to de-select those irrelevant weak classifiers in the ensemble while training a classification cascade. This results in improved detection performance as well as fewer weak classifiers in the learned strong classifier. Compared with AsymBoost of Viola and Jones, our proposed asymmetric boosting is nonheuristic and the training procedure is much simpler. Experiments on face and pedestrian detection demonstrate that our methods have superior detection performance than some of the state-of-the-art object detectors.", "Training a cascade-based face detector using boosting and Haar features is computationally expensive, often requiring weeks on single CPU machines. The bottleneck is at training and selecting Haar features for a single weak classifier, currently in minutes. Traditional techniques for training a weak classifier usually run in 0(NT log N), with N examples (approximately 10,000), and T features (approximately 40,000). We present a method to train a weak classifier in time 0(Nd2 + T), where d is the number of pixels of the probed image sub-window (usually from 350 to 500), by using only the statistics of the weighted input data. Experimental results revealed a significantly reduced training time of a weak classifier to the order of seconds. In particular, this method suffers very minimal immerse in training time with very large increases in members of Haar features, enjoying a significant gain in accuracy, even with reduced training time.", "Boosting is one of the most important recent developments in classification methodology. Boosting works by sequentially applying a classification algorithm to reweighted versions of the training data and then taking a weighted majority vote of the sequence of classifiers thus produced. For many classification algorithms, this simple strategy results in dramatic improvements in performance. We show that this seemingly mysterious phenomenon can be understood in terms of well-known statistical principles, namely additive modeling and maximum likelihood. For the two-class problem, boosting can be viewed as an approximation to additive modeling on the logistic scale using maximum Bernoulli likelihood as a criterion. We develop more direct approximations and show that they exhibit nearly identical results to boosting. 
Direct multiclass generalizations based on multinomial likelihood are derived that exhibit performance comparable to other recently proposed multiclass generalizations of boosting in most situations, and far superior in some. We suggest a minor modification to boosting that can reduce computation, often by factors of 10 to 50. Finally, we apply these insights to produce an alternative formulation of boosting decision trees. This approach, based on best-first truncated tree induction, often leads to better performance, and can provide interpretable descriptions of the aggregate decision rule. It is also much faster computationally, making it more suitable to large-scale data mining applications.", "In a weakly-supervised scenario object detectors need to be trained using image-level annotation alone. Since bounding-box-level ground truth is not available, most of the solutions proposed so far are based on an iterative, Multiple Instance Learning framework in which the current classifier is used to select the highest-confidence boxes in each image, which are treated as pseudo-ground truth in the next training iteration. However, the errors of an immature classifier can make the process drift, usually introducing many of false positives in the training dataset. To alleviate this problem, we propose in this paper a training protocol based on the self-paced learning paradigm. The main idea is to iteratively select a subset of images and boxes that are the most reliable, and use them for training. While in the past few years similar strategies have been adopted for SVMs and other classifiers, we are the first showing that a self-paced approach can be used with deep-network-based classifiers in an end-to-end training pipeline. The method we propose is built on the fully-supervised Fast-RCNN architecture and can be applied to similar architectures which represent the input image as a bag of boxes. We show state-of-the-art results on Pascal VOC 2007, Pascal VOC 2010 and ILSVRC 2013. On ILSVRC 2013 our results based on a low-capacity AlexNet network outperform even those weakly-supervised approaches which are based on much higher-capacity networks.", "In this paper, we study the problem of training large-scale face identification model with imbalanced training data. This problem naturally exists in many real scenarios including large-scale celebrity recognition, movie actor annotation, etc. Our solution contains two components. First, we build a face feature extraction model, and improve its performance, especially for the persons with very limited training samples, by introducing a regularizer to the cross entropy loss for the multi-nomial logistic regression (MLR) learning. This regularizer encourages the directions of the face features from the same class to be close to the direction of their corresponding classification weight vector in the logistic regression. Second, we build a multi-class classifier using MLR on top of the learned face feature extraction model. Since the standard MLR has poor generalization capability for the one-shot classes even if these classes have been oversampled, we propose a novel supervision signal called underrepresented-classes promotion loss, which aligns the norms of the weight vectors of the one-shot classes (a.k.a. underrepresented-classes) to those of the normal classes. 
In addition to the original cross entropy loss, this new loss term effectively promotes the underrepresented classes in the learned model and leads to a remarkable improvement in face recognition performance. We test our solution on the MS-Celeb-1M low-shot learning benchmark task. Our solution recognizes 94.89 of the test images at the precision of 99 for the one-shot classes. To the best of our knowledge, this is the best performance among all the published methods using this benchmark task with the same setup, including all the participants in the recent MS-Celeb-1M challenge at ICCV 2017.", "Many learning tasks for computer vision problems can be described by multiple views or multiple features. These views can be exploited in order to learn from unlabeled data, a.k.a. \"multi-view learning\". In these methods, usually the classifiers iteratively label each other a subset of the unlabeled data and ignore the rest. In this work, we propose a new multi-view boosting algorithm that, unlike other approaches, specifically encodes the uncertainties over the unlabeled samples in terms of given priors. Instead of ignoring the unlabeled samples during the training phase of each view, we use the different views to provide an aggregated prior which is then used as a regularization term inside a semisupervised boosting method. Since we target multi-class applications, we first introduce a multi-class boosting algorithm based on maximizing the mutli-class classification margin. Then, we propose our multi-class semisupervised boosting algorithm which is able to use priors as a regularization component over the unlabeled data. Since the priors may contain a significant amount of noise, we introduce a new loss function for the unlabeled regularization which is robust to noisy priors. Experimentally, we show that the multi-class boosting algorithms achieves state-of-theart results in machine learning benchmarks. We also show that the new proposed loss function is more robust compared to other alternatives. Finally, we demonstrate the advantages of our multi-view boosting approach for object category recognition and visual object tracking tasks, compared to other multi-view learning methods.", "This paper presents a novel learning framework for training boosting cascade based object detector from large scale dataset. The framework is derived from the well-known Viola-Jones (VJ) framework but distinguished by three key differences. First, the proposed framework adopts multi-dimensional SURF features instead of single dimensional Haar features to describe local patches. In this way, the number of used local patches can be reduced from hundreds of thousands to several hundreds. Second, it adopts logistic regression as weak classifier for each local patch instead of decision trees in the VJ framework. Third, we adopt AUC as a single criterion for the convergence test during cascade training rather than the two trade-off criteria (false-positive-rate and hit-rate) in the VJ framework. The benefit is that the false-positive-rate can be adaptive among different cascade stages, and thus yields much faster convergence speed of SURF cascade. Combining these points together, the proposed approach has three good properties. First, the boosting cascade can be trained very efficiently. Experiments show that the proposed approach can train object detectors from billions of negative samples within one hour even on personal computers. 
Second, the built detector is comparable to the state-of-the-art algorithm not only in accuracy but also in processing speed. Third, the built detector is small in model-size due to short cascade stages.", "Boosting combines a set of moderately accurate weak classifiers to form a highly accurate predictor. Compared with binary boosting classification, multi-class boosting has received less attention. We propose a novel multi-class boosting formulation here. Unlike most previous multi-class boosting algorithms which decompose a multi-boost problem into multiple independent binary boosting problems, we formulate a direct optimization method for training multi-class boosting. Moreover, by explicitly deriving the Lagrange dual of the formulated primal optimization problem, we design totally-corrective boosting using the column generation technique in convex optimization. At each iteration, all weak classifiers' weights are updated. Our experiments on various data sets demonstrate that our direct multi-class boosting achieves competitive test accuracy compared with state-of-the-art multi-class boosting in the literature.", "In this paper, we focus on improving the proposal classification stage in the object detection task and present implicit negative sub-categorization and sink diversion to lift the performance by strengthening the loss function in this stage. First, based on the observation that the “background” class is generally very diverse and thus challenging to be handled as a single indiscriminative class in existing state-of-the-art methods, we propose to divide the background category into multiple implicit sub-categories to explicitly differentiate diverse patterns within it. Second, since the ground truth class inevitably has low-value probability scores for certain images, we propose to add a “sink” class and divert the probabilities of wrong classes to this class when necessary, such that the ground truth label will still have a higher probability than other wrong classes even though it has low probability output. Additionally, we propose to use dilated convolution, which is widely used in the semantic segmentation task, for efficient and valuable context information extraction. Extensive experiments on PASCAL VOC 2007 and 2012 data sets show that our proposed methods based on faster R-CNN implementation can achieve state-of-the-art mAPs, i.e., 84.1% and 82.6%, respectively, and obtain a 2.5% improvement on ILSVRC DET compared with that of ResNet." ] }
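The boosting abstracts above all build on the same core loop: reweight the training set so that later weak learners focus on the examples the current ensemble gets wrong. As an illustration only (not the specific multi-class or totally-corrective formulations cited above), here is a minimal discrete AdaBoost round in Python; the sklearn-style weak-learner interface is an assumption of the sketch.

```python
import numpy as np
from sklearn.base import clone
from sklearn.tree import DecisionTreeClassifier

def adaboost_round(X, y, w, weak_learner=DecisionTreeClassifier(max_depth=1)):
    """One round of discrete AdaBoost for labels y in {-1, +1}.

    w is the current distribution over training examples; the returned
    (stump, alpha, w_new) triple is one term of the final weighted vote.
    """
    stump = clone(weak_learner).fit(X, y, sample_weight=w)
    pred = stump.predict(X)
    err = np.sum(w[pred != y]) / np.sum(w)            # weighted error
    alpha = 0.5 * np.log((1.0 - err) / max(err, 1e-12))
    w_new = w * np.exp(-alpha * y * pred)             # upweight mistakes
    return stump, alpha, w_new / w_new.sum()
```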
1901.07542
2911511992
In almost every election cycle, the validity of the United States Electoral College is brought into question. The 2016 Presidential Election again brought up the issue of a candidate winning the popular vote but not winning the Electoral College, with Hillary Clinton receiving close to three million more votes than Donald Trump. However, did the popular vote actually determine the most liked candidate in the election? In this paper, we demonstrate that different voting policies can alter which candidate is elected. Additionally, we explore the trade-offs between each of these mechanisms. Finally, we introduce two novel mechanisms with the intent of electing the least polarizing candidate.
Significant research has been done by Stanford University scholars like Andy Hall, Shanto Iyengar, Bruce Cain, and David Brady on polarization and election systems. Hall, a Political Science Professor at Stanford, claims that "U.S. legislatures have become increasingly polarized and dysfunctional in part because of how difficult running for and holding office is." Combining theoretical and empirical evidence, he "shows how voters are forced to elect extremists because moderates don't run for office, and document[s] how the rising costs of running for office, and the falling benefits of holding office, have deterred moderates from running." @cite_1 Moreover, Hall has researched results in primaries @cite_14 , and concludes that extremists and their affiliated party tend to suffer penalties in the general election. To Hall, the general election system tends to have a moderating effect on elections in the House of Representatives in America. Similarly, Shanto Iyengar, another Political Science Professor, provides compelling evidence of growing polarization in America. @cite_8 Iyengar's research raises important questions, many of which are beyond the scope of this paper.
{ "cite_N": [ "@cite_14", "@cite_1", "@cite_8" ], "mid": [ "2793302648", "2138174666", "2611831414" ], "abstract": [ "Political observers, campaign experts, and academics alike argue bitterly over whether it is more important for a party to capture ideologically moderate swing voters or to encourage turnout among hardcore partisans. The behavioral literature in American politics suggests that voters are not informed enough, and are too partisan, to be swing voters, while the institutional literature suggests that moderate candidates tend to perform better. We speak to this debate by examining the link between the ideology of congressional candidates and the turnout of their parties’ bases in US House races, 2006–2014. Combining a regression discontinuity design in close primary races with survey and administrative data on individual voter turnout, we find that extremist nominees—as measured by the mix of campaign contributions they receive—suffer electorally, largely because they decrease their party’s share of turnout in the general election, skewing the electorate towards their opponent’s party. The results help show how the behavioral and institutional literatures can be connected. For our sample of elections, turnout appears to be the dominant force in determining election outcomes, but it advantages ideologically moderate candidates because extremists appear to activate the opposing party’s base more than their own.", "This article studies the interplay of U.S. primary and general elections. I examine how the nomination of an extremist changes general-election outcomes and legislative behavior in the U.S. House, 1980 - 2010, using a regression discontinuity design in primary elections. When an extremist - as measured by primary-election campaign receipt patterns - wins a \"coin-flip\" election over a more moderate candidate, the party's general-election vote share decreases by approximately 9 -13 percentage points, and the probability that the party wins the seat decreases by 35 - 54 percentage points. This electoral penalty is so large that nominating the more extreme primary candidate causes the district's subsequent roll-call representation to reverse, becoming more liberal when an extreme Republican is nominated and more conservative when an extreme Democrat is nominated. Overall, the findings show how general-election voters act as a moderating filter in response to primary nominations.", "The American Voter - a seminal work in political science - uncovered the multifaceted nature of voting behavior which has been corroborated in electoral research for decades since. In this paper, we leverage The American Voter as an analysis framework in the realm of computational political science, employing the factors of party, personality, and policy to structure the analysis of public discourse on online social media during the 2016 U.S. presidential primaries. Our analysis of 50 million tweets reveals the continuing importance of these three factors; our understanding is also enriched by the application of sentiment analysis techniques. The overwhelmingly negative sentiment of conversations surrounding 10 major presidential candidates reveals more \"crosstalk\" from Democratic leaning users towards Republican candidates, and less vice-versa. We uncover the lack of moderation as the most discussed personality dimension during this campaign season, as the political field becomes more extreme - Clinton and Rubio are perceived as moderate, while Trump, Sanders, and Cruz are not. 
While the most discussed issues are foreign policy and immigration, Republicans tweet more about abortion than Democrats who tweet more about gay rights than Republicans. Finally, we illustrate the importance of multifaceted political discourse analysis by applying regression to quantify the impact of party, personality, and policy on national polls." ] }
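To make the abstract's central claim concrete, that different voting policies can alter which candidate is elected, the following self-contained sketch compares plurality and Borda count on one hypothetical electorate; the ballot profile is invented purely for illustration.

```python
from collections import Counter

# Hypothetical ranked ballots, most preferred first.
ballots = 4 * [("A", "B", "C")] + 3 * [("B", "C", "A")] + 2 * [("C", "B", "A")]

def plurality_winner(ballots):
    # Only first choices count.
    return Counter(b[0] for b in ballots).most_common(1)[0][0]

def borda_winner(ballots):
    # A candidate gets (n - 1 - rank) points per ballot.
    scores = Counter()
    for b in ballots:
        for rank, cand in enumerate(b):
            scores[cand] += len(b) - 1 - rank
    return scores.most_common(1)[0][0]

print(plurality_winner(ballots))  # A (4 first-place votes)
print(borda_winner(ballots))      # B (highest aggregate ranking)
```

The same nine ballots elect A under plurality and B under Borda count, which is exactly the kind of mechanism-dependence the paper explores.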
1901.07621
2912660065
Counterfactual Regret Minimization (CFR) is the most successful algorithm for finding approximate Nash equilibria in imperfect information games. However, CFR's reliance on full game-tree traversals limits its scalability. For this reason, the game's state- and action-space is often abstracted (i.e. simplified) for CFR, and the resulting strategy is then translated back to the full game, which requires extensive expert-knowledge and often converges to highly exploitable policies. A recently proposed method, Deep CFR, applies deep learning directly to CFR, allowing the agent to intrinsically abstract and generalize over the state-space from samples, without requiring expert knowledge. In this paper, we introduce Single Deep CFR (SD-CFR), a simplified variant of Deep CFR that has a lower overall approximation error by avoiding the training of an average strategy network. We show that SD-CFR is more attractive from a theoretical perspective and empirically outperforms Deep CFR with respect to exploitability and one-on-one play in poker.
Deep CFR was not the first algorithm to use deep learning with the goal of solving large games efficiently. @cite_23 (R-CFR) applies function approximation to estimate regret values in CFR and CFR @math . Unfortunately, despite promising expectations, recent work failed to apply R-CFR in combination with sampling @cite_2 . @cite_9 (ARM) is similar to R-CFR but was only applied to single-player environments. Nevertheless, ARM did show that regret-based methods can be of interest in multi-agent imperfect information games much bigger, less structured, and more chaotic than poker, thereby opening up interesting lines of research in multi-agent reinforcement learning.
{ "cite_N": [ "@cite_9", "@cite_23", "@cite_2" ], "mid": [ "2395326189", "1543121739", "2886549473" ], "abstract": [ "In this thesis, we investigate the problem of decision-making in large two-player zero-sum games using Monte Carlo sampling and regret minimization methods. We demonstrate four major contributions. The first is Monte Carlo Counterfactual Regret Minimization (MC-CFR): a generic family of sample-based algorithms that compute near-optimal equilibrium strategies. Secondly, we develop a theory for applying counterfactual regret minimization to a generic subset of imperfect recall games as well as a lossy abstraction mechanism for reducing the size of very large games. Thirdly, we describe Monte Carlo Minimax Search (MCMS): an adversarial search algorithm based on *-Minimax that uses sparse sampling. We then present variance reduction techniques that can be used in these settings, with a focused application to Monte Carlo Tree Search (MCTS). We thoroughly evaluate our algorithms in practice using several different domains and sampling strategies.", "In two-player zero-sum games, if both players minimize their average external regret, then the average of the strategy profiles converges to a Nash equilibrium. For n-player general-sum games, however, theoretical guarantees for regret minimization are less understood. Nonetheless, Counterfactual Regret Minimization (CFR), a popular regret minimization algorithm for extensive-form games, has generated winning three-player Texas Hold'em agents in the Annual Computer Poker Competition (ACPC). In this paper, we provide the first set of theoretical properties for regret minimization algorithms in non-zero-sum games by proving that solutions eliminate iterative strict domination. We formally define in extensive-form games, show that CFR avoids iteratively strictly dominated actions and strategies, and demonstrate that removing iteratively dominated actions is enough to win a mock tournament in a small poker game. In addition, for two-player non-zero-sum games, we bound the worst case performance and show that in practice, regret minimization can yield strategies very close to equilibrium. Our theoretical advancements lead us to a new modification of CFR for games with more than two players that is more efficient and may be used to generate stronger strategies than previously possible. Furthermore, we present a new three-player Texas Hold'em poker agent that was built using CFR and a novel game decomposition method. Our new agent wins the three-player events of the 2012 ACPC and defeats the winning three-player programs from previous competitions while requiring less resources to generate than the 2011 winner. Finally, we show that our CFR modification computes a strategy of equal quality to our new agent in a quarter of the time of standard CFR using half the memory.", "We consider a class of restless multi-armed bandit (RMAB) problems with unknown arm dynamics. At each time, a player chooses an arm out of @math arms to play, referred to as an active arm, and receives a random reward from a finite set of reward states. The reward state of the active arm transits according to an unknown Markovian dynamic. The reward state of passive arms (which are not chosen to play at time t) evolves according to an arbitrary unknown random process. The objective is an arm-selection policy that minimizes the regret, defined as the reward loss with respect to a player that always plays the most rewarding arm. 
This class of RMAB problems has been studied recently in the context of communication networks and financial investment applications. We develop a strategy that selects arms to be played in a consecutive manner in which the selection sequencing rules are adaptively updated controlled by the current sample reward means, referred to as Adaptive Sequencing Rules (ASR) algorithm. By designing judiciously the adaptive sequencing rules of the chosen arms, we show that ASR algorithm achieves a logarithmic regret order with time and a finite-sample bound on the regret is established. Although existing methods have shown a logarithmic regret order with time in this RMAB setting, the theoretical analysis presents significant improvement in the regret scaling with respect to the system parameters under ASR. Extensive simulation results support the theoretical study and demonstrate strong performance of the algorithm as compared to existing methods." ] }
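At the heart of every CFR variant discussed in this record, tabular, sampled, or regression-based, is regret matching: cumulative counterfactual regrets at an information set are mapped to a strategy. Below is a minimal sketch of that tabular rule, the quantity that R-CFR approximates with a function approximator instead of a table.

```python
import numpy as np

def regret_matching(cum_regrets):
    """Current strategy at one infoset from cumulative regrets.

    Actions with positive regret are played proportionally to that
    regret; if no action has positive regret, play uniformly.
    """
    pos = np.maximum(cum_regrets, 0.0)
    total = pos.sum()
    if total > 0.0:
        return pos / total
    return np.full(len(cum_regrets), 1.0 / len(cum_regrets))

print(regret_matching(np.array([4.0, -2.0, 1.0])))  # [0.8 0.  0.2]
```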
1901.07621
2912660065
Counterfactual Regret Minimization (CFR) is the most successful algorithm for finding approximate Nash equilibria in imperfect information games. However, CFR's reliance on full game-tree traversals limits its scalability. For this reason, the game's state- and action-space is often abstracted (i.e. simplified) for CFR, and the resulting strategy is then translated back to the full game, which requires extensive expert-knowledge and often converges to highly exploitable policies. A recently proposed method, Deep CFR, applies deep learning directly to CFR, allowing the agent to intrinsically abstract and generalize over the state-space from samples, without requiring expert knowledge. In this paper, we introduce Single Deep CFR (SD-CFR), a simplified variant of Deep CFR that has a lower overall approximation error by avoiding the training of an average strategy network. We show that SD-CFR is more attractive from a theoretical perspective and empirically outperforms Deep CFR with respect to exploitability and one-on-one play in poker.
A very successful application of deep learning to CFR in the domain of poker is DeepStack @cite_4 , an algorithm that was able to defeat professional poker players in head-to-head gameplay in the game of Heads-Up No-Limit Hold'em Poker (HUNL) with statistical significance. However, while DeepStack works very well in poker, it still relies on tabular CFR methods to generate data for its counterfactual value networks and for re-solving. This could be too expensive in environments where sample complexity is a concern. Furthermore, because DeepStack does not use any card abstraction, it would have difficulties handling games with many more private information states than HUNL has.
{ "cite_N": [ "@cite_4" ], "mid": [ "2574978968" ], "abstract": [ "Artificial intelligence has seen several breakthroughs in recent years, with games often serving as milestones. A common feature of these games is that players have perfect information. Poker, the quintessential game of imperfect information, is a long-standing challenge problem in artificial intelligence. We introduce DeepStack, an algorithm for imperfect-information settings. It combines recursive reasoning to handle information asymmetry, decomposition to focus computation on the relevant decision, and a form of intuition that is automatically learned from self-play using deep learning. In a study involving 44,000 hands of poker, DeepStack defeated, with statistical significance, professional poker players in heads-up no-limit Texas hold’em. The approach is theoretically sound and is shown to produce strategies that are more difficult to exploit than prior approaches." ] }
1901.07621
2912660065
Counterfactual Regret Minimization (CFR) is the most successful algorithm for finding approximate Nash equilibria in imperfect information games. However, CFR's reliance on full game-tree traversals limits its scalability. For this reason, the game's state- and action-space is often abstracted (i.e. simplified) for CFR, and the resulting strategy is then translated back to the full game, which requires extensive expert-knowledge and often converges to highly exploitable policies. A recently proposed method, Deep CFR, applies deep learning directly to CFR, allowing the agent to intrinsically abstract and generalize over the state-space from samples, without requiring expert knowledge. In this paper, we introduce Single Deep CFR (SD-CFR), a simplified variant of Deep CFR that has a lower overall approximation error by avoiding the training of an average strategy network. We show that SD-CFR is more attractive from a theoretical perspective and empirically outperforms Deep CFR with respect to exploitability and one-on-one play in poker.
To the best of our knowledge, @cite_30 was the first algorithm to soundly apply deep reinforcement learning (deep RL) from single trajectory samples to large extensive-form games. While not showing record-breaking results in terms of exploitability, NFSP was able to learn a competitive strategy in Limit Texas Hold'em Poker over just 14 GPU days.
{ "cite_N": [ "@cite_30" ], "mid": [ "2793035934" ], "abstract": [ "Deep reinforcement learning (RL) has achieved many recent successes, yet experiment turn-around time remains a key bottleneck in research and in practice. We investigate how to optimize existing deep RL algorithms for modern computers, specifically for a combination of CPUs and GPUs. We confirm that both policy gradient and Q-value learning algorithms can be adapted to learn using many parallel simulator instances. We further find it possible to train using batch sizes considerably larger than are standard, without negatively affecting sample complexity or final performance. We leverage these facts to build a unified framework for parallelization that dramatically hastens experiments in both classes of algorithm. All neural network computations use GPUs, accelerating both data collection and training. Our results include using an entire NVIDIA DGX-1 to learn successful strategies in Atari games in single-digit minutes, using both synchronous and asynchronous algorithms." ] }
1901.07621
2912660065
Counterfactual Regret Minimization (CFR) is the most successful algorithm for finding approximate Nash equilibria in imperfect information games. However, CFR's reliance on full game-tree traversals limits its scalability. For this reason, the game's state- and action-space is often abstracted (i.e. simplified) for CFR, and the resulting strategy is then translated back to the full game, which requires extensive expert-knowledge and often converges to highly exploitable policies. A recently proposed method, Deep CFR, applies deep learning directly to CFR, allowing the agent to intrinsically abstract and generalize over the state-space from samples, without requiring expert knowledge. In this paper, we introduce Single Deep CFR (SD-CFR), a simplified variant of Deep CFR that has a lower overall approximation error by avoiding the training of an average strategy network. We show that SD-CFR is more attractive from a theoretical perspective and empirically outperforms Deep CFR with respect to exploitability and one-on-one play in poker.
Recent literature elaborates on the convergence properties of multi-agent deep reinforcement learning (deep RL) @cite_14 . A theoretical discussion around the relationship between game-theoretic approaches to finding Nash equilibria and the convergence of actor-critic reinforcement learning methods in multi-agent imperfect information games @cite_2 led to three new policy optimization algorithms, namely , , and .
{ "cite_N": [ "@cite_14", "@cite_2" ], "mid": [ "2904455790", "361876" ], "abstract": [ "Despite the recent advances of deep reinforcement learning (DRL), agents trained by DRL tend to be brittle and sensitive to the training environment, especially in the multi-agent scenarios. In the multi-agent setting, a DRL agent’s policy can easily get stuck in a poor local optima w.r.t. its training partners – the learned policy may be only locally optimal to other agents’ current policies. In this paper, we focus on the problem of training robust DRL agents with continuous actions in the multi-agent learning setting so that the trained agents can still generalize when its opponents’ policies alter. To tackle this problem, we proposed a new algorithm, MiniMax Multi-agent Deep Deterministic Policy Gradient (M3DDPG) with the following contributions: (1) we introduce a minimax extension of the popular multi-agent deep deterministic policy gradient algorithm (MADDPG), for robust policy learning; (2) since the continuous action space leads to computational intractability in our minimax learning objective, we propose Multi-Agent Adversarial Learning (MAAL) to efficiently solve our proposed formulation. We empirically evaluate our M3DDPG algorithm in four mixed cooperative and competitive multi-agent environments and the agents trained by our method significantly outperforms existing baselines.", "This paper investigates the problem of policy learning in multiagent environments using the stochastic game framework, which we briefly overview. We introduce two properties as desirable for a learning agent when in the presence of other learning agents, namely rationality and convergence. We examine existing reinforcement learning algorithms according to these two properties and notice that they fail to simultaneously meet both criteria. We then contribute a new learning algorithm, WoLF policy hillclimbing, that is based on a simple principle: “learn quickly while losing, slowly while winning.” The algorithm is proven to be rational and we present empirical results for a number of stochastic games showing the algorithm converges." ] }
1901.07683
2913745806
Existing methods generate the class activation map (CAM) from a fixed set of classes (i.e., using all the classes), while the discriminative cues between class pairs are not considered. Note that activation maps obtained by considering different class pairs are complementary, and therefore can provide more discriminative cues to overcome the shortcoming of existing CAM generation that the highlighted regions are usually local part regions rather than global object regions due to the lack of object cues. In this paper, we generate the CAM by using a few representative classes, with the aim of extracting more discriminative cues by considering each class pair to obtain the CAM more globally. The advantages are twofold. Firstly, the representative classes are able to obtain activation regions that are complementary to each other, and therefore lead to generating the activation map more accurately. Secondly, we only need to consider a small number of representative classes, making the CAM generation suitable for small networks. We propose a clustering based method to select the representative classes. Multiple binary classification models, rather than a single multi-class classification model, are used to generate the CAM. Moreover, we propose a multi-layer fusion based CAM generation method to simultaneously combine high-level semantic features and low-level detail features. We validate the proposed method on the PASCAL VOC and COCO databases in terms of segmentation ground truth. Various networks, including classical networks (Resnet-50, Resnet-101 and Resnet-152) and small networks (VGG-19, Resnet-18 and Mobilenet), are considered. Experimental results show that the proposed method clearly improves CAM generation.
Several class activation map generation methods have been proposed recently @cite_12 @cite_21 @cite_0 @cite_16 @cite_10 . Some methods first introduce local mechanisms such as global max pooling @cite_14 and log-sum-exp pooling @cite_1 to highlight the activation regions. Then, @cite_9 propose the CAM method, which modifies the high layers of the classification network with a global average pooling layer followed by an FC layer. The CAM is then obtained by averaging the feature maps of the last convolutional layer, weighted by the weights of the following FC layer, which yields better activation maps. After that, several methods have been proposed to enhance CAM generation. An important extension is Grad-CAM @cite_13 , which uses the gradient signals to form the weights directly. Hence, the drawback of CAM, namely that it modifies the initial network, is avoided.
{ "cite_N": [ "@cite_13", "@cite_14", "@cite_9", "@cite_21", "@cite_1", "@cite_0", "@cite_16", "@cite_10", "@cite_12" ], "mid": [ "2549418531", "2950275949", "2147347568", "2962858109", "2181199739", "2616247523", "2950328304", "1960777822", "1524680991" ], "abstract": [ "We propose a technique for making Convolutional Neural Network (CNN)-based models more transparent by visualizing input regions that are 'important' for predictions -- or visual explanations. Our approach, called Gradient-weighted Class Activation Mapping (Grad-CAM), uses class-specific gradient information to localize important regions. These localizations are combined with existing pixel-space visualizations to create a novel high-resolution and class-discriminative visualization called Guided Grad-CAM. These methods help better understand CNN-based models, including image captioning and visual question answering (VQA) models. We evaluate our visual explanations by measuring their ability to discriminate between classes, to inspire trust in humans, and their correlation with occlusion maps. Grad-CAM provides a new way to understand CNN-based models. We have released code, an online demo hosted on CloudCV, and a full version of this extended abstract.", "In this paper we evaluate the quality of the activation layers of a convolutional neural network (CNN) for the gen- eration of object proposals. We generate hypotheses in a sliding-window fashion over different activation layers and show that the final convolutional layers can find the object of interest with high recall but poor localization due to the coarseness of the feature maps. Instead, the first layers of the network can better localize the object of interest but with a reduced recall. Based on this observation we design a method for proposing object locations that is based on CNN features and that combines the best of both worlds. We build an inverse cascade that, going from the final to the initial convolutional layers of the CNN, selects the most promising object locations and refines their boxes in a coarse-to-fine manner. The method is efficient, because i) it uses the same features extracted for detection, ii) it aggregates features using integral images, and iii) it avoids a dense evaluation of the proposals due to the inverse coarse-to-fine cascade. The method is also accurate; it outperforms most of the previously proposed object proposals approaches and when plugged into a CNN-based detector produces state-of-the- art detection performance.", "In this paper we evaluate the quality of the activation layers of a convolutional neural network (CNN) for the generation of object proposals. We generate hypotheses in a sliding-window fashion over different activation layers and show that the final convolutional layers can find the object of interest with high recall but poor localization due to the coarseness of the feature maps. Instead, the first layers of the network can better localize the object of interest but with a reduced recall. Based on this observation we design a method for proposing object locations that is based on CNN features and that combines the best of both worlds. We build an inverse cascade that, going from the final to the initial convolutional layers of the CNN, selects the most promising object locations and refines their boxes in a coarse-to-fine manner. 
The method is efficient, because i) it uses the same features extracted for detection, ii) it aggregates features using integral images, and iii) it avoids a dense evaluation of the proposals due to the inverse coarse-to-fine cascade. The method is also accurate; it outperforms most of the previously proposed object proposals approaches and when plugged into a CNN-based detector produces state-of-the-art detection performance.", "We propose a technique for producing ‘visual explanations’ for decisions from a large class of Convolutional Neural Network (CNN)-based models, making them more transparent. Our approach – Gradient-weighted Class Activation Mapping (Grad-CAM), uses the gradients of any target concept (say logits for ‘dog’ or even a caption), flowing into the final convolutional layer to produce a coarse localization map highlighting the important regions in the image for predicting the concept. Unlike previous approaches, Grad-CAM is applicable to a wide variety of CNN model-families: (1) CNNs with fully-connected layers (e.g. VGG), (2) CNNs used for structured outputs (e.g. captioning), (3) CNNs used in tasks with multi-modal inputs (e.g. visual question answering) or reinforcement learning, without architectural changes or re-training. We combine Grad-CAM with existing fine-grained visualizations to create a high-resolution class-discriminative visualization, Guided Grad-CAM, and apply it to image classification, image captioning, and visual question answering (VQA) models, including ResNet-based architectures. In the context of image classification models, our visualizations (a) lend insights into failure modes of these models (showing that seemingly unreasonable predictions have reasonable explanations), (b) outperform previous methods on the ILSVRC-15 weakly-supervised localization task, (c) are more faithful to the underlying model, and (d) help achieve model generalization by identifying dataset bias. For image captioning and VQA, our visualizations show even non-attention based models can localize inputs. Finally, we design and conduct human studies to measure if Grad-CAM explanations help users establish appropriate trust in predictions from deep networks and show that Grad-CAM helps untrained users successfully discern a ‘stronger’ deep network from a ‘weaker’ one even when both make identical predictions. Our code is available at https: github.com ramprs grad-cam along with a demo on CloudCV [2] and video at youtu.be COjUB9Izk6E.", "Several recent approaches showed how the representations learned by Convolutional Neural Networks can be repurposed for novel tasks. Most commonly it has been shown that the activation features of the last fully connected layers (fc7 or fc6) of the network, followed by a linear classifier outperform the state-of-the-art on several recognition challenge datasets. Instead of recognition, this paper focuses on the image retrieval problem and proposes and examines alternative pooling strategies derived for CNN features. The presented scheme uses the feature maps from an earlier layer 5 of the CNN architecture, which has been shown to preserve coarse spatial information and is semantically meaningful. We examine several pooling strategies and demonstrate superior performance on the image retrieval task (INRIA Holidays) at a fraction of the computational cost, while using relatively small memory requirements. 
In addition to retrieval, we see similar efficiency gains on the SUN397 scene categorization dataset, demonstrating wide applicability of this simple strategy. We also introduce and evaluate a novel GeoPlaces5K dataset from different geographical locations in the world for image retrieval that stresses more dramatic changes in appearance and viewpoint.", "We propose a technique for producing \"visual explanations\" for decisions from a large class of CNN-based models, making them more transparent. Our approach - Gradient-weighted Class Activation Mapping (Grad-CAM), uses the gradients of any target concept, flowing into the final convolutional layer to produce a coarse localization map highlighting the important regions in the image for predicting the concept. Unlike previous approaches, GradCAM is applicable to a wide variety of CNN model-families: (1) CNNs with fully-connected layers (e.g. VGG), (2) CNNs used for structured outputs (e.g. captioning), (3) CNNs used in tasks with multimodal inputs (e.g. VQA) or reinforcement learning, without any architectural changes or re-training. We combine GradCAM with fine-grained visualizations to create a high-resolution class-discriminative visualization and apply it to off-the-shelf image classification, captioning, and visual question answering (VQA) models, including ResNet-based architectures. In the context of image classification models, our visualizations (a) lend insights into their failure modes (showing that seemingly unreasonable predictions have reasonable explanations), (b) are robust to adversarial images, (c) outperform previous methods on weakly-supervised localization, (d) are more faithful to the underlying model and (e) help achieve generalization by identifying dataset bias. For captioning and VQA, our visualizations show that even non-attention based models can localize inputs. Finally, we conduct human studies to measure if GradCAM explanations help users establish trust in predictions from deep networks and show that GradCAM helps untrained users successfully discern a \"stronger\" deep network from a \"weaker\" one. Our code is available at this https URL A demo and a video of the demo can be found at this http URL and youtu.be COjUB9Izk6E.", "In this work, we revisit the global average pooling layer proposed in [13], and shed light on how it explicitly enables the convolutional neural network to have remarkable localization ability despite being trained on image-level labels. While this technique was previously proposed as a means for regularizing training, we find that it actually builds a generic localizable deep representation that can be applied to a variety of tasks. Despite the apparent simplicity of global average pooling, we are able to achieve 37.1% top-5 error for object localization on ILSVRC 2014, which is remarkably close to the 34.2% top-5 error achieved by a fully supervised CNN approach. We demonstrate that our network is able to localize the discriminative image regions on a variety of tasks despite not being trained for them", "A number of recent studies have shown that a Deep Convolutional Neural Network (DCNN) pretrained on a large dataset can be adopted as a universal image descriptor, and that doing so leads to impressive performance at a range of image classification tasks. Most of these studies, if not all, adopt activations of the fully-connected layer of a DCNN as the image or region representation and it is believed that convolutional layer activations are less discriminative. 
This paper, however, advocates that if used appropriately, convolutional layer activations constitute a powerful image representation. This is achieved by adopting a new technique proposed in this paper called cross-convolutional-layer pooling. More specifically, it extracts subarrays of feature maps of one convolutional layer as local features, and pools the extracted features with the guidance of the feature maps of the successive convolutional layer. Compared with existing methods that apply DCNNs in the similar local feature setting, the proposed method avoids the input image style mismatching issue which is usually encountered when applying fully connected layer activations to describe local regions. Also, the proposed method is easier to implement since it is codebook free and does not have any tuning parameters. By applying our method to four popular visual classification tasks, it is demonstrated that the proposed method can achieve comparable or in some cases significantly better performance than existing fully-connected layer based image representations.", "Deep convolutional neural networks (CNN) have shown their promise as a universal representation for recognition. However, global CNN activations lack geometric invariance, which limits their robustness for classification and matching of highly variable scenes. To improve the invariance of CNN activations without degrading their discriminative power, this paper presents a simple but effective scheme called multi-scale orderless pooling (MOP-CNN). This scheme extracts CNN activations for local patches at multiple scale levels, performs orderless VLAD pooling of these activations at each level separately, and concatenates the result. The resulting MOP-CNN representation can be used as a generic feature for either supervised or unsupervised recognition tasks, from image classification to instance-level retrieval; it consistently outperforms global CNN activations without requiring any joint training of prediction layers for a particular target dataset. In absolute terms, it achieves state-of-the-art results on the challenging SUN397 and MIT Indoor Scenes classification datasets, and competitive results on ILSVRC2012 2013 classification and INRIA Holidays retrieval datasets." ] }
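The CAM construction described in this record is just a weighted average of the last convolutional feature maps, with the FC weights of the target class as coefficients. A minimal numpy sketch follows; the (C, H, W) feature layout and (num_classes, C) weight layout are assumptions for the illustration.

```python
import numpy as np

def class_activation_map(features, fc_weights, class_idx):
    """CAM for one class as a weighted sum of conv feature maps.

    features:   (C, H, W) activations of the last convolutional layer
    fc_weights: (num_classes, C) weights of the FC layer that follows
                global average pooling
    """
    w = fc_weights[class_idx]                         # (C,)
    cam = np.tensordot(w, features, axes=([0], [0]))  # (H, W)
    cam = np.maximum(cam, 0.0)                        # keep positive evidence
    return cam / (cam.max() + 1e-8)                   # normalize to [0, 1]
```

For Grad-CAM, the coefficient vector w would instead be the channel-wise spatial average of the gradients of the class score with respect to the feature maps, which is what removes the need to modify the network.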
1901.07698
2913287814
In manufacturing and automation settings, robots often have to perform highly-repetitive manipulation tasks in structured environments. In this work we are interested in settings where tasks are similar, yet not identical (e.g., due to uncertain orientation of objects) and motion planning needs to be extremely fast. Preprocessing-based approaches prove to be very beneficial in these settings; they analyze the configuration-space offline to generate some auxiliary information which can then be used in the query phase to speedup planning times. Typically, the tighter the requirement is on query times the larger the memory footprint will be. In particular, for high-dimensional spaces, providing real-time planning capabilities is extremely challenging. While there are planners that guarantee real-time performance by limiting the planning horizon, we are not aware of general-purpose planners capable of doing it for infinite horizon (i.e., planning to the goal). To this end, we propose a preprocessing-based method that provides provable bounds on the query time while incurring only a small amount of memory overhead in the query phase. We evaluate our method on a 7-DOF robot arm and show a speedup of over tenfold in query time when compared to the PRM algorithm, while provably guaranteeing a maximum query time of less than 3 milliseconds.
A straightforward approach to efficiently preprocess a known environment is to use the PRM algorithm @cite_8 , which generates a roadmap (a graph embedded in the configuration space where vertices correspond to configurations and edges correspond to paths connecting close-by configurations). Once a dense roadmap has been pre-computed, any query can be efficiently answered online by connecting the start and goal to the roadmap. Query times can be significantly sped up by further preprocessing the roadmaps using landmarks @cite_23 . Unfortunately, there is no guarantee that a query can be connected to the roadmap, as PRM only provides probabilistic-completeness guarantees @cite_25 . Furthermore, this connecting phase requires running a collision-detection algorithm, which is typically considered the computational bottleneck in many motion-planning algorithms @cite_6 .
{ "cite_N": [ "@cite_6", "@cite_25", "@cite_23", "@cite_8" ], "mid": [ "2606507580", "2072127296", "2142644931", "1864112212" ], "abstract": [ "A landmark based heuristic is investigated for reducing query phase run-time of the probabilistic roadmap (PRM) motion planning method. The heuristic is generated by storing minimum spanning trees from a small number of vertices within the PRM graph and using these trees to approximate the cost of a shortest path between any two vertices of the graph. The intermediate step of preprocessing the graph increases the time and memory requirements of the classical motion planning technique in exchange for speeding up individual queries making the method advantageous in multi-query applications. This paper investigates these trade-offs on PRM graphs constructed in randomized environments as well as a practical manipulator simulation. We conclude that the method is preferable to Dijkstra's algorithm or the A∗ algorithm with conventional heuristics in multi-query applications.", "Roadmap spanners provide a way to acquire sparse data structures that efficiently answer motion planning queries with probabilistic completeness and asymptotic near-optimality. The current SPARS method provides these properties by building two graphs in parallel: a dense asymptotically-optimal roadmap based on PRM* and its spanner. This paper shows that it is possible to relax the conditions under which a sample is added to the spanner and provide guarantees, while not requiring the use of a dense graph. A key aspect of SPARS is that the probability of adding nodes to the roadmap goes to zero as iterations increase, which is maintained in the proposed extension. The paper describes the new algorithm, argues its theoretical properties and evaluates it against PRM* and the original SPARS algorithm. The experimental results show that the memory requirements of the method upon construction are dramatically reduced, while returning competitive quality paths with PRM*. There is a small sacrifice in the size of the final spanner relative to SPARS but the new method still returns graphs orders of magnitudes smaller than PRM*, leading to very efficient online query resolution.", "This paper presents a randomized motion planner for kinodynamic asteroid avoidance problems, in which a robot must avoid collision with moving obstacles under kinematic, dynamic constraints and reach a specified goal state. Inspired by probabilistic-roadmap (PRM) techniques, the planner samples the state x time space of a robot by picking control inputs at random in order to compute a roadmap that captures the connectivity of the space. However, the planner does not precompute a roadmap as most PRM planners do. Instead, for each planning query, it generates, on the fly, a small roadmap that connects the given initial and goal state. In contrast to PRM planners, the roadmap computed by our algorithm is a directed graph oriented along the time axis of the space. To verify the planner's effectiveness in practice, we tested it both in simulated environments containing many moving obstacles and on a real robot under strict dynamic constraints. The efficiency of the planner makes it possible for a robot to respond to a changing environment without knowing the motion of moving obstacles well in advance.", "In this paper we present a novel probabilistic sampling-based motion planning algorithm called the Fast Marching Tree algorithm FMT*. 
The algorithm is specifically aimed at solving complex motion planning problems in high-dimensional configuration spaces. This algorithm is proven to be asymptotically optimal and is shown to converge to an optimal solution faster than its state-of-the-art counterparts, chiefly PRM* and RRT*. The FMT* algorithm performs a 'lazy' dynamic programming recursion on a predetermined number of probabilistically drawn samples to grow a tree of paths, which moves steadily outward in cost-to-arrive space. As such, this algorithm combines features of both single-query algorithms (chiefly RRT) and multiple-query algorithms (chiefly PRM), and is reminiscent of the Fast Marching Method for the solution of Eikonal equations. As a departure from previous analysis approaches that are based on the notion of almost sure convergence, the FMT* algorithm is analyzed under the notion of convergence in probability: the extra mathematical flexibility of this approach allows for convergence rate bounds, the first in the field of optimal sampling-based motion planning. Specifically, for a certain selection of tuning parameters and configuration spaces, we obtain a convergence rate bound of order O(n^{-1/d+ρ}), where n is the number of sampled points, d is the dimension of the configuration space, and ρ is an arbitrarily small constant. We go on to demonstrate asymptotic optimality for a number of variations on FMT*, namely when the configuration space is sampled non-uniformly, when the cost is not arc length, and when connections are made based on the number of nearest neighbors instead of a fixed connection radius. Numerical experiments over a range of dimensions and obstacle configurations confirm our theoretical and heuristic arguments by showing that FMT*, for a given execution time, returns substantially better solutions than either PRM* or RRT*, especially in high-dimensional configuration spaces and in scenarios where collision-checking is expensive." ] }
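The PRM query phase sketched in this record, connecting the start and goal to a precomputed roadmap and searching it, fits in a few lines. The following is a simplified illustration (networkx supplies the graph search; nearest, collision_free, and the Euclidean dist are placeholder assumptions), and it makes the two caveats visible: the expensive online collision checks and the possibility that no connection exists.

```python
import math
import networkx as nx

def dist(a, b):
    # Euclidean distance between two configurations (tuples of floats).
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def prm_query(roadmap, start, goal, nearest, collision_free, k=5):
    """Answer one query on a precomputed PRM roadmap (an nx.Graph)."""
    G = roadmap.copy()
    for q in (start, goal):
        G.add_node(q)
        for nbr in nearest(roadmap, q, k):
            if collision_free(q, nbr):      # the online bottleneck
                G.add_edge(q, nbr, weight=dist(q, nbr))
    try:
        return nx.shortest_path(G, start, goal, weight="weight")
    except nx.NetworkXNoPath:
        return None  # PRM gives no guarantee the query connects
```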
1901.07698
2913287814
In manufacturing and automation settings, robots often have to perform highly-repetitive manipulation tasks in structured environments. In this work we are interested in settings where tasks are similar, yet not identical (e.g., due to uncertain orientation of objects) and motion planning needs to be extremely fast. Preprocessing-based approaches prove to be very beneficial in these settings; they analyze the configuration-space offline to generate some auxiliary information which can then be used in the query phase to speedup planning times. Typically, the tighter the requirement is on query times the larger the memory footprint will be. In particular, for high-dimensional spaces, providing real-time planning capabilities is extremely challenging. While there are planners that guarantee real-time performance by limiting the planning horizon, we are not aware of general-purpose planners capable of doing it for infinite horizon (i.e., planning to the goal). To this end, we propose a preprocessing-based method that provides provable bounds on the query time while incurring only a small amount of memory overhead in the query phase. We evaluate our method on a 7-DOF robot arm and show a speedup of over tenfold in query time when compared to the PRM algorithm, while provably guaranteeing a maximum query time of less than 3 milliseconds.
Recently, the repetition roadmap @cite_19 was suggested as a way to extend the PRM for the case of multiple highly-similar scenarios. While this approach exhibits significant speedup in computation time, it still suffers from the previously-mentioned shortcomings.
{ "cite_N": [ "@cite_19" ], "mid": [ "2883982988" ], "abstract": [ "We present the Repetition Roadmap, a motion planner that effectively exploits the repetitiveness of a set of tasks with small variations to efficiently compute new motions. The method learns an abstract roadmap of probability distributions for the configuration space of a particular task set from previous solution paths. We show how to construct the Repetition Roadmap by learning a Gaussian mixture model and connecting the distribution components based on the connectivity information of the prior paths. We present an algorithm that exploits the information in the Repetition Roadmap to guide the search for solutions of similar tasks. We illustrate our method in a maze, which explains the construction of the Repetition Roadmap and how the method can generalize over different environments. We show how to apply the Repetition Roadmap to similar constrained manipulation tasks and present our results including significant speedup in computation time when compared to uniform and adaptive sampling." ] }
1901.07698
2913287814
In manufacturing and automation settings, robots often have to perform highly-repetitive manipulation tasks in structured environments. In this work we are interested in settings where tasks are similar, yet not identical (e.g., due to uncertain orientation of objects) and motion planning needs to be extremely fast. Preprocessing-based approaches prove to be very beneficial in these settings; they analyze the configuration-space offline to generate some auxiliary information which can then be used in the query phase to speedup planning times. Typically, the tighter the requirement is on query times the larger the memory footprint will be. In particular, for high-dimensional spaces, providing real-time planning capabilities is extremely challenging. While there are planners that guarantee real-time performance by limiting the planning horizon, we are not aware of general-purpose planners capable of doing it for infinite horizon (i.e., planning to the goal). To this end, we propose a preprocessing-based method that provides provable bounds on the query time while incurring only a small amount of memory overhead in the query phase. We evaluate our method on a 7-DOF robot arm and show a speedup of over tenfold in query time when compared to the PRM algorithm, while provably guaranteeing a maximum query time of less than 3 milliseconds.
An alternative approach to address our problem is to precompute a set of complete paths into a library and, given a query, attempt to match complete paths from the library to the new query @cite_20 @cite_14 . Using paths from previous search episodes (also known as using experience) has also been an active line of work @cite_9 @cite_17 @cite_13 @cite_11 . Some of these methods have been integrated with sparse motion-planning roadmaps (see e.g., @cite_5 @cite_16 ) to reduce the memory footprint of the algorithm. Unfortunately, none of the mentioned algorithms provide the bounded planning-time guarantees that are required by our applications.
{ "cite_N": [ "@cite_14", "@cite_11", "@cite_9", "@cite_5", "@cite_16", "@cite_13", "@cite_20", "@cite_17" ], "mid": [ "2570393549", "2293052688", "2050372846", "2508815980", "2132044905", "1971458750", "2110405746", "2099893201" ], "abstract": [ "We propose a humanoid robot navigation planning framework that reuses previous experience to decrease planning time. The framework is intended for navigating complex unstructured environments using both palm and foot contacts. In a complex environment, discrete-search-based contact space planners trade-off between high branching factor and action flexibility. Although approaches such as weighted A∗, ARA∗ and ANA∗ could speed up the search by compromising on optimality, they can be very slow when the heuristic is inaccurate. In the proposed framework, an experience-retrieval module is added in parallel to ANA∗. This module collects previously-generated motion plans and clusters them based on contact pose similarity to form a motion plan library. To retrieve an appropriate plan from the library for a given environment, the framework uses a distance between the contact poses in the plan and environment surfaces. Candidate plans are then modified with local trajectory optimization until a plan fitting the query environment is found. Our experiments show that the proposed framework outperforms planning-from-scratch in success rate in unstructured environments by at least 28 and can navigate difficult environments such as rubble and narrow corridors.", "In this paper, we develop an online motion planning approach which learns from its planning episodes (experiences) a graph, an Experience Graph. On the theoretical side, we show that planning with Experience graphs is complete and provides bounds on suboptimality with respect to the graph that represents the original planning problem. Experimentally, we show in simulations and on a physical robot that our approach is particularly suitable for higher-dimensional motion planning tasks such as planning for two armed mobile manipulation. Many mundane manipulation tasks such as picking and placing various objects in a kitchen are highly repetitive. It is expected that robots should be capable of learning and improving their performance with every execution of these repetitive tasks. This work focuses on learning from experience for motion planning. Our approach relies on a graphsearch method for planning that builds an Experience Graph online to represent the high-level connectivity of the free space used for the encountered planning tasks. The planner uses the Experience graph to accelerate its planning whenever possible and gracefully degenerates to planning from scratch if no previous planning experiences can be reused. Planning with Experience graphs is complete and it provides bounds on suboptimality with respect to the graph that represents the original planning problem. Related work in (Jiang and Kallmann 2007) takes a database of motion plans and uses an RRT to draw the search towards a similar path to the new query. Our approach may use parts of many prior paths (not just one) and provides bounds on solution quality, unlike the above work. We provide results showing Experience Graphs can significantly improve the performance of a high-dimensional full-body planner for the PR2 robot. For more details refer to the full paper ( 2012).", "Reusing APIs of existing libraries is a common practice during software development, but searching suitable APIs and their usages can be time-consuming [6]. 
In this paper, we study a new and more practical approach to help users find usages of APIs given only simple text phrases, when users have limited knowledge about an API library. We model API invocations as an API graph and aim to find an optimum connected subgraph that meets users' search needs. The problem is challenging since the search space in an API graph is very huge. We start with a greedy subgraph search algorithm which returns a connected subgraph containing nodes with high textual similarity to the query phrases. Two refinement techniques are proposed to improve the quality of the returned subgraph. Furthermore, as the greedy subgraph search algorithm relies on online query of shortest path between two graph nodes, we propose a space-efficient compressed shortest path indexing scheme that can efficiently recover the exact shortest path. We conduct extensive experiments to show that the proposed subgraph search approach for API recommendation is very effective in that it boosts the average F1-measure of the state-of-the-art approach, Portfolio [15], on two groups of real-life queries by 64% and 36%, respectively.", "In (2015), we proposed a graph-based formulation that links and clusters person hypotheses over time by solving a minimum cost subgraph multicut problem. In this paper, we modify and extend (2015) in three ways: (1) We introduce a novel local pairwise feature based on local appearance matching that is robust to partial occlusion and camera motion. (2) We perform extensive experiments to compare different pairwise potentials and to analyze the robustness of the tracking formulation. (3) We consider a plain multicut problem and remove outlying clusters from its solution. This allows us to employ an efficient primal feasible optimization algorithm that is not applicable to the subgraph multicut problem of (2015). Unlike the branch-and-cut algorithm used there, this efficient algorithm used here is applicable to long videos and many detections. Together with the novel pairwise feature, it eliminates the need for the intermediate tracklet representation of (2015). We demonstrate the effectiveness of our overall approach on the MOT16 benchmark ( 2016), achieving state-of-the-art performance.", "We demonstrate how Dijkstra's algorithm for shortest path queries can be accelerated by using precomputed shortest path distances. Our approach allows a completely flexible tradeoff between query time and space consumption for precomputed distances. In particular, sublinear space is sufficient to give the search a strong “sense of direction”. We evaluate our approach experimentally using large, real-world road networks.", "We propose a framework, called Lightning, for planning paths in high-dimensional spaces that is able to learn from experience, with the aim of reducing computation time. This framework is intended for manipulation tasks that arise in applications ranging from domestic assistance to robot-assisted surgery. Our framework consists of two main modules, which run in parallel: a planning-from-scratch module, and a module that retrieves and repairs paths stored in a path library. After a path is generated for a new query, a library manager decides whether to store the path based on computation time and the generated path's similarity to the retrieved path. 
To retrieve an appropriate path from the library we use two heuristics that exploit two key aspects of the problem: (i) A correlation between the amount a path violates constraints and the amount of time needed to repair that path, and (ii) the implicit division of constraints into those that vary across environments in which the robot operates and those that do not. We evaluated an implementation of the framework on several tasks for the PR2 mobile manipulator and a minimally-invasive surgery robot in simulation. We found that the retrieve-and-repair module produced paths faster than planning-from-scratch in over 90 of test cases for the PR2 and in 58 of test cases for the minimally-invasive surgery robot.", "Learning and then recognizing a route, whether travelled during the day or at night, in clear or inclement weather, and in summer or winter is a challenging task for state of the art algorithms in computer vision and robotics. In this paper, we present a new approach to visual navigation under changing conditions dubbed SeqSLAM. Instead of calculating the single location most likely given a current image, our approach calculates the best candidate matching location within every local navigation sequence. Localization is then achieved by recognizing coherent sequences of these “local best matches”. This approach removes the need for global matching performance by the vision front-end - instead it must only pick the best match within any short sequence of images. The approach is applicable over environment changes that render traditional feature-based techniques ineffective. Using two car-mounted camera datasets we demonstrate the effectiveness of the algorithm and compare it to one of the most successful feature-based SLAM algorithms, FAB-MAP. The perceptual change in the datasets is extreme; repeated traverses through environments during the day and then in the middle of the night, at times separated by months or years and in opposite seasons, and in clear weather and extremely heavy rain. While the feature-based method fails, the sequence-based algorithm is able to match trajectory segments at 100 precision with recall rates of up to 60 .", "Existing high-dimensional motion planning algorithms are simultaneously overpowered and underpowered. In domains sparsely populated by obstacles, the heuristics used by sampling-based planners to navigate “narrow passages” can be needlessly complex; furthermore, additional post-processing is required to remove the jerky or extraneous motions from the paths that such planners generate. In this paper, we present CHOMP, a novel method for continuous path refinement that uses covariant gradient techniques to improve the quality of sampled trajectories. Our optimization technique both optimizes higher-order dynamics and is able to converge over a wider range of input paths relative to previous path optimization strategies. In particular, we relax the collision-free feasibility prerequisite on input paths required by those strategies. As a result, CHOMP can be used as a standalone motion planner in many real-world planning queries. We demonstrate the effectiveness of our proposed method in manipulation planning for a 6-DOF robotic arm as well as in trajectory generation for a walking quadruped robot." ] }
1901.07698
2913287814
In manufacturing and automation settings, robots often have to perform highly-repetitive manipulation tasks in structured environments. In this work we are interested in settings where tasks are similar, yet not identical (e.g., due to uncertain orientation of objects) and motion planning needs to be extremely fast. Preprocessing-based approaches prove to be very beneficial in these settings; they analyze the configuration-space offline to generate some auxiliary information which can then be used in the query phase to speedup planning times. Typically, the tighter the requirement is on query times the larger the memory footprint will be. In particular, for high-dimensional spaces, providing real-time planning capabilities is extremely challenging. While there are planners that guarantee real-time performance by limiting the planning horizon, we are not aware of general-purpose planners capable of doing it for infinite horizon (i.e., planning to the goal). To this end, we propose a preprocessing-based method that provides provable bounds on the query time while incurring only a small amount of memory overhead in the query phase. We evaluate our method on a 7-DOF robot arm and show a speedup of over tenfold in query time when compared to the PRM algorithm, while provably guaranteeing a maximum query time of less than 3 milliseconds.
Finally, our notion of attractor states is similar to control-based methods that ensure safe operation over local regions of the free configuration space @cite_22 @cite_2 . These regions are then used within a high-level motion planner to compute collision-free paths.
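To make the retrieve-and-repair scheme described above concrete, here is a minimal Python sketch of a Lightning-style path library. All names (PathLibrary, repair, the collision_free callback) are hypothetical placeholders, and the perturbation-based repair is a toy strategy for illustration, not the authors' actual algorithm.

import numpy as np

class PathLibrary:
    """Stores previously computed paths; each path is an (N, d) array."""
    def __init__(self):
        self.paths = []

    def store(self, path):
        self.paths.append(np.asarray(path, dtype=float))

    def retrieve(self, start, goal):
        # Return the stored path whose endpoints are closest to the query.
        if not self.paths:
            return None
        cost = lambda p: (np.linalg.norm(p[0] - start)
                          + np.linalg.norm(p[-1] - goal))
        return min(self.paths, key=cost)

def repair(path, start, goal, collision_free):
    """Re-anchor a retrieved path to the new endpoints and patch colliding
    waypoints by random local perturbation (toy repair strategy)."""
    path = np.vstack([start, path[1:-1], goal])
    for i in range(len(path)):
        q, tries = path[i].copy(), 0
        while not collision_free(q) and tries < 100:
            q = path[i] + np.random.normal(scale=0.05, size=q.shape)
            tries += 1
        path[i] = q
    return path if all(collision_free(q) for q in path) else None

In a full system this retrieval module would run in parallel with a planning-from-scratch module, and whichever returns a valid path first wins.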
{ "cite_N": [ "@cite_22", "@cite_2" ], "mid": [ "2104906962", "2014776084" ], "abstract": [ "This paper presents an alternative to the Potential Field Method of computing local collision-free motions for general manipulators. The main distinction is that we separate the description of the task from constraints of anti-collision. This enables to control accurately all relevant measures of the problem. A task is expressed by the minimization of a function plus eventually some geometric constraints, whereas anti-collision is translated into very simple linear constraints through the methods of the velocity dampers and the tangent separating planes. This approach is applied to the control of manipulators with a high number of degrees of freedom, using hierarchical descriptions of the environment and the robots. This is illustrated by two realizations, path planning for a ten link manipulator in the cluttered environment of a nuclear plant reactor, and cooperative tasks between two six degree of freedom robots. In the end we discuss how to incorporate the action of a global planner in this general framework.", "We designed a distributed collision-free formation flight control law in the framework of nonlinear model predictive control. Formation configuration is determined in the virtual reference point coordinate system. Obstacle avoidance is guaranteed by cost penalty, and intervehicle collision avoidance is guaranteed by cost penalty combined with a new priority strategy." ] }
1901.07439
2911690298
Recently, Graph Convolutional Networks (GCNs) have been widely studied for graph-structured data representation and learning. However, in many real applications, data come with multiple graphs, and it is non-trivial to adapt GCNs to data representation with multiple graph structures. One main challenge for multi-graph representation is how to exploit the structure information of each individual graph and the correlation information across multiple graphs simultaneously. In this paper, we propose a novel Multiple Graph Adversarial Learning (MGAL) framework for multi-graph representation and learning. MGAL aims to learn an optimal structure-invariant and consistent representation for multiple graphs in a common subspace via a novel adversarial learning framework, thus incorporating both intra-graph structure information and inter-graph correlation information simultaneously. Based on MGAL, we then provide a unified network for the semi-supervised learning task. Promising experimental results demonstrate the effectiveness of the MGAL model.
Our multiple graph adversarial learning model is inspired by the Generative Adversarial Network (GAN) @cite_7 , which consists of a generator @math and a discriminator @math . The generator is trained to generate samples that convince the discriminator, while the discriminator aims to distinguish the samples returned by the generator from real ones. Recently, adversarial learning has been explored in graph representation tasks. @cite_14 propose a graph representation model with GANs (GraphGAN). @cite_12 propose an adversarial network embedding (ANE), which employs adversarial learning to regularize graph representations. @cite_23 also propose an adversarially regularized graph autoencoder model for graph embedding.
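As a concrete reminder of the minimax game sketched above, the following minimal PyTorch loop trains a generator against a discriminator; the layer sizes and the random stand-in for "real" embeddings are illustrative assumptions, not the MGAL architecture.

import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 32))  # noise -> sample
D = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))   # sample -> logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(128, 32)  # placeholder for real graph embeddings

for _ in range(100):
    # Discriminator step: push real samples toward 1 and generated ones toward 0.
    fake = G(torch.randn(128, 16)).detach()
    loss_d = bce(D(real), torch.ones(128, 1)) + bce(D(fake), torch.zeros(128, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: try to convince the discriminator that fakes are real.
    fake = G(torch.randn(128, 16))
    loss_g = bce(D(fake), torch.ones(128, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()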
{ "cite_N": [ "@cite_14", "@cite_23", "@cite_12", "@cite_7" ], "mid": [ "2768762802", "2753491454", "2963474063", "2616969219" ], "abstract": [ "We introduce an adversarial learning framework, which we named KBGAN, to improve the performances of a wide range of existing knowledge graph embedding models. Because knowledge graph datasets typically only contain positive facts, sampling useful negative training examples is a non-trivial task. Replacing the head or tail entity of a fact with a uniformly randomly selected entity is a conventional method for generating negative facts used by many previous works, but the majority of negative facts generated in this way can be easily discriminated from positive facts, and will contribute little towards the training. Inspired by generative adversarial networks (GANs), we use one knowledge graph embedding model as a negative sample generator to assist the training of our desired model, which acts as the discriminator in GANs. The objective of the generator is to generate difficult negative samples that can maximize their likeliness determined by the discriminator, while the discriminator minimizes its training loss. This framework is independent of the concrete form of generator and discriminator, and therefore can utilize a wide variety of knowledge graph embedding models as its building blocks. In experiments, we adversarially train two translation-based models, TransE and TransD, each with assistance from one of the two probability-based models, DistMult and ComplEx. We evaluate the performances of KBGAN on the link prediction task, using three knowledge base completion datasets: FB15k-237, WN18 and WN18RR. Experimental results show that adversarial training substantially improves the performances of target embedding models under various settings.", "A Triangle Generative Adversarial Network ( @math -GAN) is developed for semi-supervised cross-domain joint distribution matching, where the training data consists of samples from each domain, and supervision of domain correspondence is provided by only a few paired samples. @math -GAN consists of four neural networks, two generators and two discriminators. The generators are designed to learn the two-way conditional distributions between the two domains, while the discriminators implicitly define a ternary discriminative function, which is trained to distinguish real data pairs and two kinds of fake data pairs. The generators and discriminators are trained together using adversarial learning. Under mild assumptions, in theory the joint distributions characterized by the two generators concentrate to the data distribution. In experiments, three different kinds of domain pairs are considered, image-label, image-image and image-attribute pairs. Experiments on semi-supervised image classification, image-to-image translation and attribute-based image generation demonstrate the superiority of the proposed approach.", "A Triangle Generative Adversarial Network ( @math -GAN) is developed for semi-supervised cross-domain joint distribution matching, where the training data consists of samples from each domain, and supervision of domain correspondence is provided by only a few paired samples. @math -GAN consists of four neural networks, two generators and two discriminators. 
The generators are designed to learn the two-way conditional distributions between the two domains, while the discriminators implicitly define a ternary discriminative function, which is trained to distinguish real data pairs and two kinds of fake data pairs. The generators and discriminators are trained together using adversarial learning. Under mild assumptions, in theory the joint distributions characterized by the two generators concentrate to the data distribution. In experiments, three different kinds of domain pairs are considered, image-label, image-image and image-attribute pairs. Experiments on semi-supervised image classification, image-to-image translation and attribute-based image generation demonstrate the superiority of the proposed approach.", "Generative adversarial networks (GANs) have great successes on synthesizing data. However, the existing GANs restrict the discriminator to be a binary classifier, and thus limit their learning capacity for tasks that need to synthesize output with rich structures such as natural language descriptions. In this paper, we propose a novel generative adversarial network, RankGAN, for generating high-quality language descriptions. Rather than training the discriminator to learn and assign absolute binary predicate for individual data sample, the proposed RankGAN is able to analyze and rank a collection of human-written and machine-written sentences by giving a reference group. By viewing a set of data samples collectively and evaluating their quality through relative ranking scores, the discriminator is able to make better assessment which in turn helps to learn a better generator. The proposed RankGAN is optimized through the policy gradient technique. Experimental results on multiple public datasets clearly demonstrate the effectiveness of the proposed approach." ] }
1901.07076
2963131478
In recent years, learned local descriptors have outperformed handcrafted ones by a large margin, due to powerful deep convolutional neural network architectures such as L2-Net [1] and triplet-based metric learning [2]. However, there are two problems in the current methods, which hinder the overall performance. First, the widely used margin loss is sensitive to incorrect correspondences, which are prevalent in the existing local descriptor learning datasets. Second, the L2 distance ignores the fact that the feature vectors have been normalized to unit norm. To tackle these two problems and further boost the performance, we propose a robust angular loss which (1) uses cosine similarity instead of L2 distance to compare descriptors and (2) relies on a robust loss function that gives a smaller penalty to triplets with negative relative similarity. The resulting descriptor shows robustness on different datasets, reaching the state-of-the-art result on the Brown dataset, as well as demonstrating excellent generalization ability on the Hpatches dataset and a Wide Baseline Stereo dataset.
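To illustrate the two ingredients named in this abstract, cosine similarity and a robust penalty, here is a small PyTorch sketch of one plausible robust angular triplet loss; the sigmoid-shaped penalty is an assumption for illustration, not necessarily the exact robust function used in the paper.

import torch
import torch.nn.functional as F

def robust_angular_loss(anchor, positive, negative, margin=0.1):
    # Compare unit-norm descriptors by cosine similarity instead of L2 distance.
    s_pos = F.cosine_similarity(anchor, positive, dim=1)
    s_neg = F.cosine_similarity(anchor, negative, dim=1)
    delta = s_neg - s_pos + margin  # > 0 for margin-violating triplets
    # Bounded, sigmoid-shaped penalty: triplets with very negative relative
    # similarity (likely wrong correspondences) receive a small, flat penalty.
    return torch.sigmoid(4.0 * delta).mean()

a, p, n = (F.normalize(torch.randn(8, 128), dim=1) for _ in range(3))
loss = robust_angular_loss(a, p, n)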
Recent work on local descriptor design has shifted from conventional hand-crafted descriptors to learning-based approaches, ranging from SIFT @cite_27 and DAISY @cite_16 to the latest methods such as DeepCompare, MatchNet, and HardNet @cite_19 @cite_7 @cite_20 @cite_5 . For deep learning-based descriptors, two research trends stand out: CNN structure design and negative sampling for embedding learning.
{ "cite_N": [ "@cite_7", "@cite_19", "@cite_27", "@cite_5", "@cite_16", "@cite_20" ], "mid": [ "1577117850", "2963157250", "2884088147", "2737260104", "2963531306", "2293670510" ], "abstract": [ "In this paper we propose a novel framework for learning local image descriptors in a discriminative manner. For this purpose we explore a siamese architecture of Deep Convolutional Neural Networks (CNN), with a Hinge embedding loss on the L2 distance between descriptors. Since a siamese architecture uses pairs rather than single image patches to train, there exist a large number of positive samples and an exponential number of negative samples. We propose to explore this space with a stochastic sampling of the training set, in combination with an aggressive mining strategy over both the positive and negative samples which we denote as \"fracking\". We perform a thorough evaluation of the architecture hyper-parameters, and demonstrate large performance gains compared to both standard CNN learning strategies, hand-crafted image descriptors like SIFT, and the state-of-the-art on learned descriptors: up to 2.5x vs SIFT and 1.5x vs the state-of-the-art in terms of the area under the curve (AUC) of the Precision-Recall curve.", "We introduce a loss for metric learning, which is inspired by the Lowe's matching criterion for SIFT. We show that the proposed loss, that maximizes the distance between the closest positive and closest negative example in the batch, is better than complex regularization methods; it works well for both shallow and deep convolution network architectures. Applying the novel loss to the L2Net CNN architecture results in a compact descriptor named HardNet. It has the same dimensionality as SIFT (128) and shows state-of-art performance in wide baseline stereo, patch verification and instance retrieval benchmarks.", "Learned local descriptors based on Convolutional Neural Networks (CNNs) have achieved significant improvements on patch-based benchmarks, whereas not having demonstrated strong generalization ability on recent benchmarks of image-based 3D reconstruction. In this paper, we mitigate this limitation by proposing a novel local descriptor learning approach that integrates geometry constraints from multi-view reconstructions, which benefits the learning process in terms of data generation, data sampling and loss computation. We refer to the proposed descriptor as GeoDesc, and demonstrate its superior performance on various large-scale benchmarks, and in particular show its great success on challenging reconstruction tasks. Moreover, we provide guidelines towards practical integration of learned descriptors in Structure-from-Motion (SfM) pipelines, showing the good trade-off that GeoDesc delivers to 3D reconstruction tasks between accuracy and efficiency.", "The research focus of designing local patch descriptors has gradually shifted from handcrafted ones (e.g., SIFT) to learned ones. In this paper, we propose to learn high performance descriptor in Euclidean space via the Convolutional Neural Network (CNN). Our method is distinctive in four aspects: (i) We propose a progressive sampling strategy which enables the network to access billions of training samples in a few epochs. (ii) Derived from the basic concept of local patch matching problem, we empha-size the relative distance between descriptors. (iii) Extra supervision is imposed on the intermediate feature maps. (iv) Compactness of the descriptor is taken into account. 
The proposed network is named as L2-Net since the output descriptor can be matched in Euclidean space by L2 distance. L2-Net achieves state-of-the-art performance on the Brown datasets [16], Oxford dataset [18] and the newly proposed Hpatches dataset [11]. The good generalization ability shown by experiments indicates that L2-Net can serve as a direct substitution of the existing handcrafted descriptors. The pre-trained L2-Net is publicly available.", "Interest point descriptors have fueled progress on almost every problem in computer vision. Recent advances in deep neural networks have enabled task-specific learned descriptors that outperform hand-crafted descriptors on many problems. We demonstrate that commonly used metric learning approaches do not optimally leverage the feature hierarchies learned in a Convolutional Neural Network (CNN), especially when applied to the task of geometric feature matching. While a metric loss applied to the deepest layer of a CNN, is often expected to yield ideal features irrespective of the task, in fact the growing receptive field as well as striding effects cause shallower features to be better at high precision matching tasks. We leverage this insight together with explicit supervision at multiple levels of the feature hierarchy for better regularization, to learn more effective descriptors in the context of geometric matching tasks. Further, we propose to use activation maps at different layers of a CNN, as an effective and principled replacement for the multi-resolution image pyramids often used for matching tasks. We propose concrete CNN architectures employing these ideas, and evaluate them on multiple datasets for 2D and 3D geometric matching as well as optical flow, demonstrating state-of-the-art results and generalization across datasets.", "In this paper we propose a new approach for learning local descriptors for matching image patches. It has recently been demonstrated that descriptors based on convolutional neural networks (CNN) can significantly improve the matching performance. Unfortunately their computational complexity is prohibitive for any practical application. We address this problem and propose a CNN based descriptor with improved matching performance, significantly reduced training and execution time, as well as low dimensionality. We propose to train the network with triplets of patches that include a positive and negative pairs. To that end we introduce a new loss function that exploits the relations within the triplets. We compare our approach to recently introduced MatchNet and DeepCompare and demonstrate the advantages of our descriptor in terms of performance, memory footprint and speed i.e. when run in GPU, the extraction time of our 128 dimensional feature is comparable to the fastest available binary descriptors such as BRIEF and ORB." ] }
1901.07076
2963131478
In recent years, learned local descriptors have outperformed handcrafted ones by a large margin, due to powerful deep convolutional neural network architectures such as L2-Net [1] and triplet-based metric learning [2]. However, there are two problems in the current methods, which hinder the overall performance. First, the widely used margin loss is sensitive to incorrect correspondences, which are prevalent in the existing local descriptor learning datasets. Second, the L2 distance ignores the fact that the feature vectors have been normalized to unit norm. To tackle these two problems and further boost the performance, we propose a robust angular loss which (1) uses cosine similarity instead of L2 distance to compare descriptors and (2) relies on a robust loss function that gives a smaller penalty to triplets with negative relative similarity. The resulting descriptor shows robustness on different datasets, reaching the state-of-the-art result on the Brown dataset, as well as demonstrating excellent generalization ability on the Hpatches dataset and a Wide Baseline Stereo dataset.
Before CNN models were broadly applied, descriptor learning was limited to specific machine learning pipelines, and a variety of methods were inspired by different aspects of the problem. Principal Components Analysis (PCA) based SIFT (PCA-SIFT) @cite_25 applies PCA to normalized gradient patches rather than the histograms of gradients used by SIFT. @cite_26 proposed filters with learned pooling and dimensionality reduction. @cite_3 studied convex sparse learning to learn pooling fields for descriptors. Aside from these descriptors, @cite_24 proposed an online method that searches over a subset of tests to increase inter-class variance and decrease intra-class variance. One thing these methods have in common is that they all rely on shallow learning architectures.
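As an example of such a shallow pipeline, the following numpy sketch mimics the PCA-SIFT recipe: per-patch gradients are flattened, normalized, and projected onto the leading principal components (the 36-dimensional output loosely follows the original paper; the random patches are placeholders).

import numpy as np

patches = np.random.rand(1000, 39, 39)        # toy grayscale patches
gy, gx = np.gradient(patches, axis=(1, 2))    # per-patch image gradients
feats = np.concatenate([gx.reshape(1000, -1), gy.reshape(1000, -1)], axis=1)
feats /= np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8  # unit norm

# PCA via SVD of the centered feature matrix; keep 36 components.
mean = feats.mean(axis=0)
U, S, Vt = np.linalg.svd(feats - mean, full_matrices=False)
descriptors = (feats - mean) @ Vt[:36].T      # (1000, 36) descriptors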
{ "cite_N": [ "@cite_24", "@cite_26", "@cite_25", "@cite_3" ], "mid": [ "1577117850", "2798365843", "2756815061", "2248723555" ], "abstract": [ "In this paper we propose a novel framework for learning local image descriptors in a discriminative manner. For this purpose we explore a siamese architecture of Deep Convolutional Neural Networks (CNN), with a Hinge embedding loss on the L2 distance between descriptors. Since a siamese architecture uses pairs rather than single image patches to train, there exist a large number of positive samples and an exponential number of negative samples. We propose to explore this space with a stochastic sampling of the training set, in combination with an aggressive mining strategy over both the positive and negative samples which we denote as \"fracking\". We perform a thorough evaluation of the architecture hyper-parameters, and demonstrate large performance gains compared to both standard CNN learning strategies, hand-crafted image descriptors like SIFT, and the state-of-the-art on learned descriptors: up to 2.5x vs SIFT and 1.5x vs the state-of-the-art in terms of the area under the curve (AUC) of the Precision-Recall curve.", "Compared to earlier multistage frameworks using CNN features, recent end-to-end deep approaches for fine-grained recognition essentially enhance the mid-level learning capability of CNNs. Previous approaches achieve this by introducing an auxiliary network to infuse localization information into the main classification network, or a sophisticated feature encoding method to capture higher order feature statistics. We show that mid-level representation learning can be enhanced within the CNN framework, by learning a bank of convolutional filters that capture class-specific discriminative patches without extra part or bounding box annotations. Such a filter bank is well structured, properly initialized and discriminatively learned through a novel asymmetric multi-stream architecture with convolutional filter supervision and a non-random layer initialization. Experimental results show that our approach achieves state-of-the-art on three publicly available fine-grained recognition datasets (CUB-200-2011, Stanford Cars and FGVC-Aircraft). Ablation studies and visualizations are provided to understand our approach.", "Convolutional Neural Network (CNN) image classifiers are traditionally designed to have sequential convolutional layers with a single output layer. This is based on the assumption that all target classes should be treated equally and exclusively. However, some classes can be more difficult to distinguish than others, and classes may be organized in a hierarchy of categories. At the same time, a CNN is designed to learn internal representations that abstract from the input data based on its hierarchical layered structure. So it is natural to ask if an inverse of this idea can be applied to learn a model that can predict over a classification hierarchy using multiple output layers in decreasing order of class abstraction. In this paper, we introduce a variant of the traditional CNN model named the Branch Convolutional Neural Network (B-CNN). A B-CNN model outputs multiple predictions ordered from coarse to fine along the concatenated convolutional layers corresponding to the hierarchical structure of the target classes, which can be regarded as a form of prior knowledge on the output. 
To learn with B-CNNs, a novel training strategy, named the Branch Training strategy (BT-strategy), is introduced which balances the strictness of the prior with the freedom to adjust parameters on the output layers to minimize the loss. In this way we show that CNN based models can be forced to learn successively coarse to fine concepts in the internal layers at the output stage, and that hierarchical prior knowledge can be adopted to boost CNN models' classification performance. Our models are evaluated to show that the B-CNN extensions improve over the corresponding baseline CNN on the benchmark datasets MNIST, CIFAR-10 and CIFAR-100.", "Deep learning methods such as convolutional neural networks (CNNs) can deliver highly accurate classification results when provided with large enough data sets and respective labels. However, using CNNs along with limited labeled data can be problematic, as this leads to extensive overfitting. In this letter, we propose a novel method by considering a pretrained CNN designed for tackling an entirely different classification problem, namely, the ImageNet challenge, and exploit it to extract an initial set of representations. The derived representations are then transferred into a supervised CNN classifier, along with their class labels, effectively training the system. Through this two-stage framework, we successfully deal with the limited-data problem in an end-to-end processing scheme. Comparative results over the UC Merced Land Use benchmark prove that our method significantly outperforms the previously best stated results, improving the overall accuracy from 83.1% up to 92.4%. Apart from statistical improvements, our method introduces a novel feature fusion algorithm that effectively tackles the large data dimensionality by using a simple and computationally efficient approach." ] }
1901.07076
2963131478
In recent years, learned local descriptors have outperformed handcrafted ones by a large margin, due to powerful deep convolutional neural network architectures such as L2-Net [1] and triplet-based metric learning [2]. However, there are two problems in the current methods, which hinder the overall performance. First, the widely used margin loss is sensitive to incorrect correspondences, which are prevalent in the existing local descriptor learning datasets. Second, the L2 distance ignores the fact that the feature vectors have been normalized to unit norm. To tackle these two problems and further boost the performance, we propose a robust angular loss which (1) uses cosine similarity instead of L2 distance to compare descriptors and (2) relies on a robust loss function that gives a smaller penalty to triplets with negative relative similarity. The resulting descriptor shows robustness on different datasets, reaching the state-of-the-art result on the Brown dataset, as well as demonstrating excellent generalization ability on the Hpatches dataset and a Wide Baseline Stereo dataset.
In the past few years, CNN-based models have pursued better performance by designing various convolutional neural network architectures, e.g., @cite_20 @cite_23 . @cite_20 adopt a two-branch network, a typical Siamese structure, for feature extraction, followed by three fully connected layers for deep metric learning. @cite_23 explore Siamese networks further, with two branches sharing no parameters, and propose a two-channel input structure that stacks center-cropped patches with plain patches.
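A compact PyTorch sketch of the two architectural families discussed here follows: a weight-sharing Siamese network with a metric sub-network, and a two-channel network that stacks both patches as input channels. Layer sizes are illustrative assumptions, not those of the cited papers.

import torch
import torch.nn as nn

def branch():  # feature extractor shared by both Siamese inputs
    return nn.Sequential(
        nn.Conv2d(1, 32, 5, stride=2), nn.ReLU(),
        nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten())

class Siamese(nn.Module):
    def __init__(self):
        super().__init__()
        self.feat = branch()  # the same weights process both patches
        self.metric = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))
    def forward(self, p1, p2):
        return self.metric(torch.cat([self.feat(p1), self.feat(p2)], dim=1))

class TwoChannel(nn.Module):  # patches stacked along the channel axis
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1))
    def forward(self, p1, p2):
        return self.net(torch.cat([p1, p2], dim=1))

p1, p2 = torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64)
scores = Siamese()(p1, p2), TwoChannel()(p1, p2)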
{ "cite_N": [ "@cite_23", "@cite_20" ], "mid": [ "2747898905", "2606006859" ], "abstract": [ "Recently, Convolutional Neural Network (CNN) based models have achieved great success in Single Image Super-Resolution (SISR). Owing to the strength of deep networks, these CNN models learn an effective nonlinear mapping from the low-resolution input image to the high-resolution target image, at the cost of requiring enormous parameters. This paper proposes a very deep CNN model (up to 52 convolutional layers) named Deep Recursive Residual Network (DRRN) that strives for deep yet concise networks. Specifically, residual learning is adopted, both in global and local manners, to mitigate the difficulty of training very deep networks, recursive learning is used to control the model parameters while increasing the depth. Extensive benchmark evaluation shows that DRRN significantly outperforms state of the art in SISR, while utilizing far fewer parameters. Code is available at https: github.com tyshiwo DRRN_CVPR17.", "The convolutional neural network (CNN), which is one of the deep learning models, has seen much success in a variety of computer vision tasks. However, designing CNN architectures still requires expert knowledge and a lot of trial and error. In this paper, we attempt to automatically construct CNN architectures for an image classification task based on Cartesian genetic programming (CGP). In our method, we adopt highly functional modules, such as convolutional blocks and tensor concatenation, as the node functions in CGP. The CNN structure and connectivity represented by the CGP encoding method are optimized to maximize the validation accuracy. To evaluate the proposed method, we constructed a CNN architecture for the image classification task with the CIFAR-10 dataset. The experimental result shows that the proposed method can be used to automatically find the competitive CNN architecture compared with state-of-the-art models." ] }
1901.07273
2914658942
We propose a new video representation in terms of an over-segmentation of dense trajectories covering the whole video. Trajectories are often used to encode long-term temporal information in several computer vision applications. Similar to temporal superpixels, a temporal slice of super-trajectories yields superpixels, but the latter contains more information because it also maintains the long, dense pixel-wise tracking information. The main challenge in using trajectories for any application is the accumulation of tracking error during trajectory construction. For our problem, this results in disconnected superpixels. We exploit edge constraints in addition to trajectory-based color and position similarity. Analogous to superpixels as a preprocessing tool for images, the proposed representation has applications for videos, especially in trajectory-based video analysis.
Image segmentation into superpixels is a widely studied problem in Computer Vision. Here we only discuss some of the prominent works in this area. The normalized cuts algorithm, by Shi and Malik, uses contour and texture cues to recursively partition the image using a pixel graph @cite_14 . Meanshift, proposed by Comaniciu and Meer, is a local mode-seeking algorithm on the color and position space to find segments of the image @cite_22 . Quickshift, by Vedaldi and Soatto, is also a mode-seeking scheme but more efficient than meanshift @cite_18 . SEEDS, by Van den Bergh et al., is a coarse-to-fine method that refines superpixel boundaries through energy-driven sampling @cite_3 . SLIC, by Achanta et al., is an optimized K-means clustering algorithm on color and position features @cite_12 . A more comprehensive list of superpixel clustering algorithms and their evaluation is available in @cite_16 .
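For intuition, here is a toy numpy sketch of SLIC-like clustering: plain K-means on concatenated color and scaled position features. Real SLIC additionally restricts each pixel's assignment search to a local window around each cluster center, which this sketch omits for brevity.

import numpy as np

def slic_like(image, k=100, compactness=10.0, iters=10):
    """Toy SLIC: K-means on (r, g, b, x, y); `compactness` trades spatial
    proximity against color similarity."""
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    s = np.sqrt(h * w / k)  # expected superpixel spacing
    feats = np.concatenate(
        [image.reshape(-1, 3),
         (compactness / s) * np.stack([xs.ravel(), ys.ravel()], axis=1)],
        axis=1)
    centers = feats[np.random.choice(h * w, k, replace=False)]
    for _ in range(iters):
        dists = ((feats[:, None, :] - centers[None]) ** 2).sum(-1)  # (h*w, k)
        labels = dists.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = feats[labels == j].mean(0)
    return labels.reshape(h, w)

labels = slic_like(np.random.rand(64, 64, 3))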
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_22", "@cite_3", "@cite_16", "@cite_12" ], "mid": [ "1938929646", "2573546209", "1950112384", "2529970009", "2424468279", "2108424265" ], "abstract": [ "We present in this paper a superpixel segmentation algorithm called Linear Spectral Clustering (LSC), which produces compact and uniform superpixels with low computational costs. Basically, a normalized cuts formulation of the superpixel segmentation is adopted based on a similarity metric that measures the color similarity and space proximity between image pixels. However, instead of using the traditional eigen-based algorithm, we approximate the similarity metric using a kernel function leading to an explicitly mapping of pixel values and coordinates into a high dimensional feature space. We revisit the conclusion that by appropriately weighting each point in this feature space, the objective functions of weighted K-means and normalized cuts share the same optimum point. As such, it is possible to optimize the cost function of normalized cuts by iteratively applying simple K-means clustering in the proposed feature space. LSC is of linear computational complexity and high memory efficiency and is able to preserve global properties of images. Experimental results show that LSC performs equally well or better than state of the art superpixel segmentation algorithms in terms of several commonly used evaluation metrics in image segmentation.", "In this paper, we present a superpixel segmentation algorithm called linear spectral clustering (LSC), which is capable of producing superpixels with both high boundary adherence and visual compactness for natural images with low computational costs. In LSC, a normalized cuts-based formulation of image segmentation is adopted using a distance metric that measures both the color similarity and the space proximity between image pixels. However, rather than directly using the traditional eigen-based algorithm, we approximate the similarity metric through a deliberately designed kernel function such that pixel values can be explicitly mapped to a high-dimensional feature space. We then apply the conclusion that by appropriately weighting each point in this feature space, the objective functions of the weighted K-means and the normalized cuts share the same optimum points. Consequently, it is possible to optimize the cost function of the normalized cuts by iteratively applying simple K-means clustering in the proposed feature space. LSC possesses linear computational complexity and high memory efficiency, since it avoids both the decomposition of the affinity matrix and the generation of the large kernel matrix. By utilizing the underlying mathematical equivalence between the two types of seemingly different methods, LSC successfully preserves global image structures through efficient local operations. Experimental results show that LSC performs as well as or even better than the state-of-the-art superpixel segmentation algorithms in terms of several commonly used evaluation metrics in image segmentation. The applicability of LSC is further demonstrated in two related computer vision tasks.", "In this paper, we tackle the problem of unsupervised segmentation in the form of superpixels. Our main emphasis is on speed and accuracy. We build on [31] to define the problem as a boundary and topology preserving Markov random field. We propose a coarse to fine optimization technique that speeds up inference in terms of the number of updates by an order of magnitude. 
Our approach is shown to outperform [31] while employing a single iteration. We evaluate and compare our approach to state-of-the-art superpixel algorithms on the BSD and KITTI benchmarks. Our approach significantly outperforms the baselines in the segmentation metrics and achieves the lowest error on the stereo task.", "In this paper, we propose a real-time image superpixel segmentation method with 50 frames/s by using the density-based spatial clustering of applications with noise (DBSCAN) algorithm. In order to decrease the computational costs of superpixel algorithms, we adopt a fast two-step framework. In the first clustering stage, the DBSCAN algorithm with color-similarity and geometric restrictions is used to rapidly cluster the pixels, and then, small clusters are merged into superpixels by their neighborhood through a distance measurement defined by color and spatial features in the second merging stage. A robust and simple distance function is defined for obtaining better superpixels in these two steps. The experimental results demonstrate that our real-time superpixel algorithm (50 frames/s) by the DBSCAN clustering outperforms the state-of-the-art superpixel segmentation methods in terms of both accuracy and efficiency.", "In this paper, we present an efficient page segmentation method for historical document images. Many existing methods either rely on hand-crafted features or perform rather slowly as they treat the problem as a pixel-level assignment problem. In order to create a feasible method for real applications, we propose to use superpixels as basic units of segmentation, and features are learned directly from pixels. An image is first oversegmented into superpixels with the simple linear iterative clustering (SLIC) algorithm. Then, each superpixel is represented by the features of its central pixel. The features are learned from pixel intensity values with stacked convolutional autoencoders in an unsupervised manner. A support vector machine (SVM) classifier is used to classify superpixels into four classes: periphery, background, text block, and decoration. Finally, the segmentation results are refined by a connected component based smoothing procedure. Experiments on three public datasets demonstrate that compared to our previous method, the proposed method is much faster and achieves comparable segmentation results. Additionally, much fewer pixels are used for classifier training.", "We present a novel image superpixel segmentation approach using the proposed lazy random walk (LRW) algorithm in this paper. Our method begins with initializing the seed positions and runs the LRW algorithm on the input image to obtain the probabilities of each pixel. Then, the boundaries of initial superpixels are obtained according to the probabilities and the commute time. The initial superpixels are iteratively optimized by the new energy function, which is defined on the commute time and the texture measurement. Our LRW algorithm with self-loops has the merits of segmenting the weak boundaries and complicated texture regions very well by the new global probability maps and the commute time strategy. The performance of superpixels is improved by relocating the center positions of superpixels and dividing the large superpixels into small ones with the proposed optimization algorithm. The experimental results have demonstrated that our method achieves better performance than previous superpixel approaches." ] }
1901.07046
2945411327
A large number of the most-subscribed YouTube channels target children of very young age. Hundreds of toddler-oriented channels on YouTube feature inoffensive, well-produced, and educational videos. Unfortunately, inappropriate content that targets this demographic is also common. YouTube's algorithmic recommendation system regrettably suggests inappropriate content because some of it mimics or is derived from otherwise appropriate content. Considering the risk for early childhood development, and an increasing trend in toddlers' consumption of YouTube media, this is a worrisome problem. In this work, we build a classifier able to discern inappropriate content that targets toddlers on YouTube with 84.3% accuracy, and leverage it to perform a first-of-its-kind, large-scale, quantitative characterization that reveals some of the risks of YouTube media consumption by young children. Our analysis reveals that YouTube is still plagued by such disturbing videos and its currently deployed counter-measures are ineffective in terms of detecting them in a timely manner. Alarmingly, using our classifier we show that young children are not only able, but likely, to encounter disturbing videos when they randomly browse the platform starting from benign videos.
Several studies have focused on understanding videos that target young children, and how they interact with such videos and the platform. Buzzi @cite_12 suggests the addition of extra parental controls on YouTube in an attempt to prevent children from accessing inappropriate content. Araújo et al. @cite_26 study the audience profiles and comments posted on YouTube videos in popular children-oriented channels, and conclude that children under the age of 13 use YouTube and are exposed to advertising, inappropriate content, and privacy issues. @cite_2 propose a binary classifier, based on video metadata, for identifying suitable YouTube videos for children. @cite_24 focus on the characterization and detection of unsafe content for children and its promoters on YouTube. They propose a machine learning classifier that considers a set of video-, user-, and comment-level features for the detection of users that promote unsafe content.
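In the spirit of the metadata-based classifier mentioned above, the following sklearn sketch trains a suitability classifier on video-level features; the feature layout and random data are illustrative assumptions, not the cited paper's exact feature set.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Toy features per video: [duration_s, views, like_ratio, n_comments, title_len]
X = np.random.rand(500, 5)
y = np.random.randint(0, 2, 500)  # 1 = suitable for children, 0 = not

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean())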
{ "cite_N": [ "@cite_24", "@cite_26", "@cite_12", "@cite_2" ], "mid": [ "2914279369", "2963592843", "2900026217", "2963491436" ], "abstract": [ "A considerable number of the most-subscribed YouTube channels feature content popular among children of very young age. Hundreds of toddler-oriented channels on YouTube offer inoffensive, well produced, and educational videos. Unfortunately, inappropriate (disturbing) content that targets this demographic is also common. YouTube's algorithmic recommendation system regrettably suggests inappropriate content because some of it mimics or is derived from otherwise appropriate content. Considering the risk for early childhood development, and an increasing trend in toddler's consumption of YouTube media, this is a worrying problem. While there are many anecdotal reports of the scale of the problem, there is no systematic quantitative measurement. Hence, in this work, we develop a classifier able to detect toddler-oriented inappropriate content on YouTube with 82.8 accuracy, and we leverage it to perform a first-of-its-kind, large-scale, quantitative characterization that reveals some of the risks of YouTube media consumption by young children. Our analysis indicates that YouTube's currently deployed counter-measures are ineffective in terms of detecting disturbing videos in a timely manner. Finally, using our classifier, we assess how prominent the problem is on YouTube, finding that young children are likely to encounter disturbing videos when they randomly browse the platform starting from benign videos.", "YouTube draws large number of users who contribute actively by uploading videos or commenting on existing videos. However, being a crowd sourced and large content pushed onto it, there is limited control over the content. This makes malicious users push content (videos and comments) which is inappropriate (unsafe), particularly when such content is placed around cartoon videos which are typically watched by kids. In this paper, we focus on presence of unsafe content for children and users who promote it. For detection of child unsafe content and its promoters, we perform two approaches, one based on supervised classification which uses an extensive set of video-level, user-level and comment-level features and another based Convolutional Neural Network using video frames. Detection accuracy of 85.7 is achieved which can be leveraged to build a system to provide a safe YouTube experience for kids. Through detailed characterization studies, we are able to successfully conclude that unsafe content promoters are less popular and engage less as compared with other users. Finally, using a network of unsafe content promoters and other users based on their engagements (likes, subscription and playlist addition) and other factors, we find that unsafe content is present very close to safe content and unsafe content promoters form very close knit communities with other users, thereby further increasing the likelihood of a child getting getting exposed to unsafe content.", "Acknowledging the advantages as well as the dangers of the internet content on kids education and entertainment, YouTube Kids was created. Based on regulations for child-friendly programs, several violations are identified and restricted from viewable content. When a child surfs the Internet, the same regulations could be automatically detected and filtered. 
However, current YouTube Kids content filtering relies on meta-data attributes, where inappropriate content could pass the filtering mechanism. This research, propose an advanced real-time content filtering approach using automated video and audio analysis as an extra layer for kids safety. The proposed method utilizes the thin-slicing theory, where several one second slices are selected randomly from the clip and extracted. The use of a one-second slice will assure a temporal analysis of the clip content, and ensures a real-time content analysis. For each slice, audio is automatically transcribed using automatic speech recognition techniques to be further analysed for its linguistic content. Furthermore, the audio signal is analysed to detect event and scenes (e.g. explosion). The image frames extracted from the slices are also inspected for its content to avoid inappropriate scenes, such as violence. Upon the success of this approach on YouTube Kids application, investigation of its generalizability to other video applications, and other languages could be performed.", "Online video services, messaging systems, games and social media services are tremendously popular among young people and children in many countries. Most of the digital services offered on the internet are advertising funded, which makes advertising ubiquitous in children’s everyday life. To understand the impact of advertising-based digital services on children, we study the collective behavior of users of YouTube for kids channels and present the demographics of a large number of users. We collected data from 12,848 videos from 17 channels in US and UK and 24 channels in Brazil. The channels in English have been viewed more than 37 billion times. We also collected more than 14 million comments made by users. Based on a combination of text-analysis and face recognition tools, we show the presence of racial and gender biases in our large sample of users. We also identify children actively using YouTube, although the minimum age for using the service is 13 years in most countries. We provide comparisons of user behavior among the three countries, which represent large user populations in the global North and the global South." ] }
1901.07046
2945411327
A large number of the most-subscribed YouTube channels target children of very young age. Hundreds of toddler-oriented channels on YouTube feature inoffensive, well-produced, and educational videos. Unfortunately, inappropriate content that targets this demographic is also common. YouTube's algorithmic recommendation system regrettably suggests inappropriate content because some of it mimics or is derived from otherwise appropriate content. Considering the risk for early childhood development, and an increasing trend in toddlers' consumption of YouTube media, this is a worrisome problem. In this work, we build a classifier able to discern inappropriate content that targets toddlers on YouTube with 84.3% accuracy, and leverage it to perform a first-of-its-kind, large-scale, quantitative characterization that reveals some of the risks of YouTube media consumption by young children. Our analysis reveals that YouTube is still plagued by such disturbing videos and its currently deployed counter-measures are ineffective in terms of detecting them in a timely manner. Alarmingly, using our classifier we show that young children are not only able, but likely, to encounter disturbing videos when they randomly browse the platform starting from benign videos.
A large body of previous work focused on the detection of malicious activity on YouTube. @cite_13 use social network analysis techniques to discover hate and extremist YouTube videos, as well as hidden communities in the ecosystem. @cite_22 develop a binary classifier trained with user and video features for detecting YouTube videos that promote hate and extremism. @cite_25 use video, audio, and textual features for training a k-nearest neighbors classifier for detecting YouTube videos containing violence. @cite_18 perform an in-depth analysis on video comments posted by alt-right channels on YouTube. They conclude that the comments of a video are a better indicator for detecting alt-right videos when compared to the video's title. @cite_23 use video features for detecting videos violating privacy or promoting harassment.
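The multimodal k-nearest-neighbors approach attributed to @cite_25 can be sketched as simple early fusion of per-modality feature vectors followed by k-NN, as below; all dimensionalities and data are placeholders.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

video = np.random.rand(300, 64)   # e.g., frame-level visual features
audio = np.random.rand(300, 32)   # e.g., spectral statistics
text = np.random.rand(300, 100)   # e.g., bag-of-words over metadata
X = np.hstack([video, audio, text])  # early fusion of the three modalities
y = np.random.randint(0, 2, 300)     # 1 = violent content

knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
knn.fit(X[:200], y[:200])
print((knn.predict(X[200:]) == y[200:]).mean())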
{ "cite_N": [ "@cite_18", "@cite_22", "@cite_23", "@cite_13", "@cite_25" ], "mid": [ "45052841", "2007072922", "2057916874", "2022303741", "2052549182" ], "abstract": [ "We describe a semi-automated system to assist law enforcement and intelligence agencies dealing with cyber-crime related to promotion of hate and radicalization on the Internet. The focus of this work is on mining YouTube to discover hate videos, users and virtual hidden communities. Finding precise information on YouTube is a challenging task because of the huge size of the YouTube repository and a large subscriber base. We present a solution based on data mining and social network analysis (using a variety of relationships such as friends, subscriptions, favorites and related videos) to aid an analyst in discovering insightful and actionable information. Furthermore, we performed a systematic study of the features and properties of the data and hidden social networks which has implications in understanding extremism on Internet. We take a case study based approach and perform empirical validation of the proposed hypothesis. Our approach succeeded in finding hate videos which were validated manually.", "Online video sharing platforms such as YouTube contains several videos and users promoting hate and extremism. Due to low barrier to publication and anonymity, YouTube is misused as a platform by some users and communities to post negative videos disseminating hatred against a particular religion, country or person. We formulate the problem of identification of such malicious videos as a search problem and present a focused-crawler based approach consisting of various components performing several tasks: search strategy or algorithm, node similarity computation metric, learning from exemplary profiles serving as training data, stopping criterion, node classifier and queue manager. We implement a best-first search algorithm and conduct experiments to measure the accuracy of the proposed approach. Experimental results demonstrate that the proposed approach is effective.", "YouTube is one of the most popular and largest video sharing websites (with social networking features) on the Internet. A significant percentage of videos uploaded on YouTube contains objectionable content and violates YouTube community guidelines. YouTube contains several copyright violated videos, commercial spam, hate and extremism promoting videos, vulgar and pornographic material and privacy invading content. This is primarily due to the low publication barrier and anonymity. We present an approach to identify privacy invading harassment and misdemeanor videos by mining the video metadata. We divide the problem into sub-problems: vulgar video detection, abuse and violence in public places and ragging video detection in school and colleges. We conduct a characterization study on a training dataset by downloading several videos using YouTube API and manually annotating the dataset. We define several discriminatory features for recognizing the target class objects. We employ a one class classifier approach to detect the objectionable video and frame the problem as a recognition problem. Our empirical analysis on test dataset reveals that linguistic features (presence of certain terms and people in the title and description of the main and related videos), popularity based, duration and category of videos can be used to predict the video type. We validate our hypothesis by conducting a series of experiments on evaluation dataset acquired from YouTube. 
Empirical results reveal that accuracy of proposed approach is more than 80 demonstrating the effectiveness of the approach.", "This paper presents an efficient approach for copies detection in a large videos archive consisting of several hundred of hours. The video content indexing method consists of extracting the dynamic behavior on the local description of interest points and further on the estimation of their trajectories along the video sequence. Analyzing the low-level description obtained allows to highlight trends of behaviors and then to assign a label of behavior to each local descriptor. Such an indexing approach has several interesting properties: it provides a rich, compact and generic description, while labels of behavior provide a high-level description of the video content. Here, we focus on video Content Based Copy Detection (CBCD). Copy detection is problematic as similarity search problem but with prominent differences. To be efficient, it requires a dedicated on-line retrieval method based on a specific voting function. This voting function must be robust to signal transformations and discriminating versus high similarities which are not copies. The method we propose in this paper is a dedicated on-line retrieval method based on a combination of the different dynamic contexts computed during the off-line indexing. A spatio-temporal registration based on the relevant combination of detected labels is then applied. This approach is evaluated using a huge video database of 300 hours with different video tests. The method is compared to a state-of-the art technique in the same conditions. We illustrate that taking labels into account in the specific voting process reduces false alarms significantly and drastically improves the precision.", "YouTube is one of the largest video sharing websites (with social networking features) on the Internet. The immense popularity of YouTube, anonymity and low publication barrier has resulted in several forms of misuse and video pollution such as uploading of malicious, copyright violated and spam video or content. YouTube has a popular and commonly used feature called as video response which allows users to post a video response to an uploaded or existing video. Some of the popular videos on YouTube receive thousands of video responses. We observe presence of opportunistic users posting unrelated, promotional, pornographic videos (spam videos posted manually or using automated scripts) as video responses to existing videos. We present a method of mining YouTube to automatically detect video response spam. We formulate the problem of video response spam detection as a one-class classification problem (a recognition task) and divide the problem into three sub-problems: promotional video recognition, pornographic or dirty video recognition and automated script or botnet uploader recognition. We create a sample dataset of target class videos for each of the three sub-problems and identify contextual features (meta-data based or non-content based features) characterizing the target class. Our empirical analysis reveals that certain linguistic features (presence of certain terms in the title or description of the YouTube video), temporal features, popularity based features, time based features can be used to predict the video type. We identify features with discriminatory powers and use it within a one-class classification framework to recognize video response spam. 
We conduct a series of experiments to validate the proposed approach and present evidences to demonstrate the effectiveness of the proposed solution with more than 80% accuracy." ] }
1901.07046
2945411327
A large number of the most-subscribed YouTube channels target children of very young age. Hundreds of toddler-oriented channels on YouTube feature inoffensive, well produced, and educational videos. Unfortunately, inappropriate content that targets this demographic is also common. YouTube's algorithmic recommendation system regrettably suggests inappropriate content because some of it mimics or is derived from otherwise appropriate content. Considering the risk for early childhood development, and an increasing trend in toddlers' consumption of YouTube media, this is a worrisome problem. In this work, we build a classifier able to discern inappropriate content that targets toddlers on YouTube with 84.3% accuracy, and leverage it to perform a first-of-its-kind, large-scale, quantitative characterization that reveals some of the risks of YouTube media consumption by young children. Our analysis reveals that YouTube is still plagued by such disturbing videos and its currently deployed counter-measures are ineffective in terms of detecting them in a timely manner. Alarmingly, using our classifier we show that young children are not only able, but likely to encounter disturbing videos when they randomly browse the platform starting from benign videos.
With regard to spam detection, @cite_10 explore video attributes that may enable the detection of spam videos on YouTube. A similar study by Sureka @cite_20 focuses on both user features and comment activity logs to derive formulas and rules that can accurately detect spamming YouTube users. Using similar features, @cite_29 characterize and identify fraudulently promoted YouTube videos. @cite_15 use only video features and propose a one-class classifier approach for detecting spam videos.
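To make the one-class formulation above concrete, here is a minimal sketch using scikit-learn's OneClassSVM. The metadata features (duration, view count, comment count, title length, suspicious-term flag) and all values are invented for illustration, not the exact attributes of the cited papers.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

# Invented per-video metadata: duration (s), views, comment count,
# title length, suspicious-term flag. Trained on the target (spam)
# class only, mirroring the recognition-task framing.
spam_videos = np.array([
    [35, 120000, 5, 80, 1],
    [28,  95000, 2, 75, 1],
    [40, 200000, 8, 90, 1],
    [33, 150000, 4, 85, 1],
])

scaler = StandardScaler().fit(spam_videos)
clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1)
clf.fit(scaler.transform(spam_videos))

candidate = np.array([[300, 1500, 40, 30, 0]])
# +1 -> resembles the spam class, -1 -> outlier (likely legitimate)
print(clf.predict(scaler.transform(candidate)))
```

Training touches only the target class; anything the learned boundary rejects is treated as legitimate, which mirrors the recognition-task framing used by the cited work.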
{ "cite_N": [ "@cite_29", "@cite_10", "@cite_20", "@cite_15" ], "mid": [ "2052549182", "2097436041", "2128388280", "2028772504" ], "abstract": [ "YouTube is one of the largest video sharing websites (with social networking features) on the Internet. The immense popularity of YouTube, anonymity and low publication barrier has resulted in several forms of misuse and video pollution such as uploading of malicious, copyright violated and spam video or content. YouTube has a popular and commonly used feature called as video response which allows users to post a video response to an uploaded or existing video. Some of the popular videos on YouTube receive thousands of video responses. We observe presence of opportunistic users posting unrelated, promotional, pornographic videos (spam videos posted manually or using automated scripts) as video responses to existing videos. We present a method of mining YouTube to automatically detect video response spam. We formulate the problem of video response spam detection as a one-class classification problem (a recognition task) and divide the problem into three sub-problems: promotional video recognition, pornographic or dirty video recognition and automated script or botnet uploader recognition. We create a sample dataset of target class videos for each of the three sub-problems and identify contextual features (meta-data based or non-content based features) characterizing the target class. Our empirical analysis reveals that certain linguistic features (presence of certain terms in the title or description of the YouTube video), temporal features, popularity based features, time based features can be used to predict the video type. We identify features with discriminatory powers and use it within a one-class classification framework to recognize video response spam. We conduct a series of experiments to validate the proposed approach and present evidences to demonstrate the effectiveness of the proposed solution with more than 80 accuracy.", "A number of online video social networks, out of which YouTube is the most popular, provides features that allow users to post a video as a response to a discussion topic. These features open opportunities for users to introduce polluted content, or simply pollution, into the system. For instance, spammers may post an unrelated video as response to a popular one aiming at increasing the likelihood of the response being viewed by a larger number of users. Moreover, opportunistic users--promoters--may try to gain visibility to a specific video by posting a large number of (potentially unrelated) responses to boost the rank of the responded video, making it appear in the top lists maintained by the system. Content pollution may jeopardize the trust of users on the system, thus compromising its success in promoting social interactions. In spite of that, the available literature is very limited in providing a deep understanding of this problem. In this paper, we go a step further by addressing the issue of detecting video spammers and promoters. Towards that end, we manually build a test collection of real YouTube users, classifying them as spammers, promoters, and legitimates. Using our test collection, we provide a characterization of social and content attributes that may help distinguish each user class. We also investigate the feasibility of using a state-of-the-art supervised classification algorithm to detect spammers and promoters, and assess its effectiveness in our test collection. 
We found that our approach is able to correctly identify the majority of the promoters, misclassifying only a small percentage of legitimate users. In contrast, although we are able to detect a significant fraction of spammers, they showed to be much harder to distinguish from legitimate users.", "People now feel more comfortable socializing over the internet through popular social networking and media websites than face to face. Thus, the social media websites are thriving more and more nowadays. Like others YouTube is a vastly popular social media site which is expanding at very fast pace. YouTube depends mostly on user created contents and sharing and spreading. Business entities and public figures are taking advantage of this popularity by creating their own page and shared information among the large number of visitors. However, due to this popularity, YouTube has become more susceptible to different types of unwanted and malicious spammer. Currently, YouTube does not have any way to handle its video spammers. It only considers mass comments or messages to be part of spamming. To increase the popularity of a video, malicious users post video response spam, where the video content is not related to the topic being discussed in the particular video or does not contain the media it is supposed to. In this research, we explore different attributes that could lead to video spammers. We first collect data of YouTube videos and manually classify them as either legitimate videos or spams. We then devise a number of attributes of videos which could potentially be used to detect spams. We apply Microsoft SQL Server Data Mining Tools (SSDT) to provide a heuristic for classifying an arbitrary video as either spam or legitimate. Our result demonstrates that in the long run we could successfully classify videos as spam or legitimate videos for most of the cases.", "A number of online video sharing systems, out of which YouTube is the most popular, provide features that allow users to post a video as a response to a discussion topic. These features open opportunities for users to introduce polluted content, or simply pollution, into the system. For instance, spammers may post an unrelated video as response to a popular one, aiming at increasing the likelihood of the response being viewed by a larger number of users. Moreover, content promoters may try to gain visibility to a specific video by posting a large number of (potentially unrelated) responses to boost the rank of the responded video, making it appear in the top lists maintained by the system. Content pollution may jeopardize the trust of users on the system, thus compromising its success in promoting social interactions. In spite of that, the available literature is very limited in providing a deep understanding of this problem. In this paper, we address the issue of detecting video spammers and promoters. Towards that end, we first manually build a test collection of real YouTube users, classifying them as spammers, promoters, and legitimate users. Using our test collection, we provide a characterization of content, individual, and social attributes that help distinguish each user class. We then investigate the feasibility of using supervised classification algorithms to automatically detect spammers and promoters, and assess their effectiveness in our test collection. 
While our classification approach succeeds at separating spammers and promoters from legitimate users, the high cost of manually labeling vast amounts of examples compromises its full potential in realistic scenarios. For this reason, we further propose an active learning approach that automatically chooses a set of examples to label, which is likely to provide the highest amount of information, drastically reducing the amount of required training data while maintaining comparable classification effectiveness." ] }
1901.07046
2945411327
A large number of the most-subscribed YouTube channels target children of very young age. Hundreds of toddler-oriented channels on YouTube feature inoffensive, well produced, and educational videos. Unfortunately, inappropriate content that targets this demographic is also common. YouTube's algorithmic recommendation system regrettably suggests inappropriate content because some of it mimics or is derived from otherwise appropriate content. Considering the risk for early childhood development, and an increasing trend in toddlers' consumption of YouTube media, this is a worrisome problem. In this work, we build a classifier able to discern inappropriate content that targets toddlers on YouTube with 84.3% accuracy, and leverage it to perform a first-of-its-kind, large-scale, quantitative characterization that reveals some of the risks of YouTube media consumption by young children. Our analysis reveals that YouTube is still plagued by such disturbing videos and its currently deployed counter-measures are ineffective in terms of detecting them in a timely manner. Alarmingly, using our classifier we show that young children are not only able, but likely to encounter disturbing videos when they randomly browse the platform starting from benign videos.
@cite_4 use dynamic network analysis methods to identify the nature of different spam campaign strategies. @cite_9 propose two supervised classification algorithms to detect spammers, promoters, and legitimate YouTube users. Also, in an effort to improve the performance of spam filtering on the platform, @cite_14 test numerous approaches and propose a tool, based on Naive Bayes, that filters spam comments on YouTube. Finally, @cite_19 propose a deep learning classifier for identifying videos that use manipulative techniques in order to increase their views (i.e., clickbait videos).
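As a minimal sketch of the Naive Bayes filtering idea: the toy comments, labels, and bigram features below are invented for the example and this is not the cited tool itself.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy labeled comments (1 = spam, 0 = legitimate); a real filter would be
# trained on thousands of annotated YouTube comments.
comments = [
    "check out my channel for free gift cards!!",
    "subscribe to me and win an iphone",
    "great video, really helped me understand the topic",
    "the part at 2:30 was hilarious",
]
labels = [1, 1, 0, 0]

model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
model.fit(comments, labels)

print(model.predict(["free gift cards on my channel"]))  # expected: [1]
print(model.predict(["the video was great"]))            # expected: [0]
```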
{ "cite_N": [ "@cite_19", "@cite_9", "@cite_14", "@cite_4" ], "mid": [ "2028772504", "2097436041", "2295416969", "2022303741" ], "abstract": [ "A number of online video sharing systems, out of which YouTube is the most popular, provide features that allow users to post a video as a response to a discussion topic. These features open opportunities for users to introduce polluted content, or simply pollution, into the system. For instance, spammers may post an unrelated video as response to a popular one, aiming at increasing the likelihood of the response being viewed by a larger number of users. Moreover, content promoters may try to gain visibility to a specific video by posting a large number of (potentially unrelated) responses to boost the rank of the responded video, making it appear in the top lists maintained by the system. Content pollution may jeopardize the trust of users on the system, thus compromising its success in promoting social interactions. In spite of that, the available literature is very limited in providing a deep understanding of this problem. In this paper, we address the issue of detecting video spammers and promoters. Towards that end, we first manually build a test collection of real YouTube users, classifying them as spammers, promoters, and legitimate users. Using our test collection, we provide a characterization of content, individual, and social attributes that help distinguish each user class. We then investigate the feasibility of using supervised classification algorithms to automatically detect spammers and promoters, and assess their effectiveness in our test collection. While our classification approach succeeds at separating spammers and promoters from legitimate users, the high cost of manually labeling vast amounts of examples compromises its full potential in realistic scenarios. For this reason, we further propose an active learning approach that automatically chooses a set of examples to label, which is likely to provide the highest amount of information, drastically reducing the amount of required training data while maintaining comparable classification effectiveness.", "A number of online video social networks, out of which YouTube is the most popular, provides features that allow users to post a video as a response to a discussion topic. These features open opportunities for users to introduce polluted content, or simply pollution, into the system. For instance, spammers may post an unrelated video as response to a popular one aiming at increasing the likelihood of the response being viewed by a larger number of users. Moreover, opportunistic users--promoters--may try to gain visibility to a specific video by posting a large number of (potentially unrelated) responses to boost the rank of the responded video, making it appear in the top lists maintained by the system. Content pollution may jeopardize the trust of users on the system, thus compromising its success in promoting social interactions. In spite of that, the available literature is very limited in providing a deep understanding of this problem. In this paper, we go a step further by addressing the issue of detecting video spammers and promoters. Towards that end, we manually build a test collection of real YouTube users, classifying them as spammers, promoters, and legitimates. Using our test collection, we provide a characterization of social and content attributes that may help distinguish each user class. 
We also investigate the feasibility of using a state-of-the-art supervised classification algorithm to detect spammers and promoters, and assess its effectiveness in our test collection. We found that our approach is able to correctly identify the majority of the promoters, misclassifying only a small percentage of legitimate users. In contrast, although we are able to detect a significant fraction of spammers, they showed to be much harder to distinguish from legitimate users.", "The profitability promoted by Google in its brand new video distribution platform YouTube has attracted an increasing number of users. However, such success has also attracted malicious users, which aim to self-promote their videos or disseminate viruses and malwares. Since YouTube offers limited tools for comment moderation, the spam volume is shockingly increasing which lead owners of famous channels to disable the comments section in their videos. Automatic comment spam filtering on YouTube is a challenge even for established classification methods, since the messages are very short and often rife with slangs, symbols and abbreviations. In this work, we have evaluated several top-performance classification techniques for such purpose. The statistical analysis of results indicate that, with 99.9% of confidence level, decision trees, logistic regression, Bernoulli Naive Bayes, random forests, linear and Gaussian SVMs are statistically equivalent. Based on this, we have also offered the TubeSpam -- an accurate online system to filter comments posted on YouTube.", "This paper presents an efficient approach for copies detection in a large videos archive consisting of several hundred of hours. The video content indexing method consists of extracting the dynamic behavior on the local description of interest points and further on the estimation of their trajectories along the video sequence. Analyzing the low-level description obtained allows to highlight trends of behaviors and then to assign a label of behavior to each local descriptor. Such an indexing approach has several interesting properties: it provides a rich, compact and generic description, while labels of behavior provide a high-level description of the video content. Here, we focus on video Content Based Copy Detection (CBCD). Copy detection is problematic as similarity search problem but with prominent differences. To be efficient, it requires a dedicated on-line retrieval method based on a specific voting function. This voting function must be robust to signal transformations and discriminating versus high similarities which are not copies. The method we propose in this paper is a dedicated on-line retrieval method based on a combination of the different dynamic contexts computed during the off-line indexing. A spatio-temporal registration based on the relevant combination of detected labels is then applied. This approach is evaluated using a huge video database of 300 hours with different video tests. The method is compared to a state-of-the art technique in the same conditions. We illustrate that taking labels into account in the specific voting process reduces false alarms significantly and drastically improves the precision." ] }
1901.07334
2913901546
Recurrent neural networks can be difficult to train on long sequence data due to the well-known vanishing gradient problem. Some architectures incorporate methods to reduce RNN state updates, therefore allowing the network to preserve memory over long temporal intervals. To address these problems of convergence, this paper proposes a timing-gated LSTM RNN model, called the Gaussian-gated LSTM (g-LSTM). The time gate controls when a neuron can be updated during training, enabling longer memory persistence and better error-gradient flow. This model captures long-temporal dependencies better than an LSTM and the time gate parameters can be learned even from non-optimal initialization values. Because the time gate limits the updates of the neuron state, the number of computes needed for the network update is also reduced. By adding a computational budget term to the training loss, we can obtain a network which further reduces the number of computes by at least 10x. Finally, by employing a temporal curriculum learning schedule for the g-LSTM, we can reduce the convergence time of the equivalent LSTM network on long sequences.
There have been a multitude of proposed methods to improve the training of RNNs, especially for long sequences. Apart from incorporating additional gating structures, for example the LSTM and the GRU, various techniques have more recently been proposed to further increase the capability of recurrent networks to learn on sequences of length over 1000. Initialization techniques such as the orthogonal initialization of kernel matrices, of the biases, and of diagonal recurrent kernel matrices ( @cite_2 ) have demonstrated success. @cite_3 propose using truncated backpropagation with an additional auxiliary loss to reconstruct previous events.
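The orthogonal-initialization idea can be sketched in a few lines of numpy; the QR-based construction below is one standard recipe, not necessarily the exact procedure of the cited works.

```python
import numpy as np

def orthogonal(n, seed=0):
    """Orthogonal n x n matrix via QR decomposition of a Gaussian draw."""
    a = np.random.default_rng(seed).standard_normal((n, n))
    q, r = np.linalg.qr(a)
    return q * np.sign(np.diag(r))  # sign fix for a uniform distribution

hidden = 128
W_hh = orthogonal(hidden)  # recurrent (hidden-to-hidden) kernel

# All singular values of an orthogonal W_hh equal 1, so repeated
# multiplication during backpropagation through time neither explodes
# nor vanishes gradient norms (before the nonlinearity is applied).
print(np.allclose(W_hh @ W_hh.T, np.eye(hidden)))  # True
```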
{ "cite_N": [ "@cite_3", "@cite_2" ], "mid": [ "1581407678", "2963196092" ], "abstract": [ "Recurrent Neural Networks (RNNs) with Long Short-Term Memory units (LSTM) are widely used because they are expressive and are easy to train. Our interest lies in empirically evaluating the expressiveness and the learnability of LSTMs in the sequence-to-sequence regime by training them to evaluate short computer programs, a domain that has traditionally been seen as too complex for neural networks. We consider a simple class of programs that can be evaluated with a single left-to-right pass using constant memory. Our main result is that LSTMs can learn to map the character-level representations of such programs to their correct outputs. Notably, it was necessary to use curriculum learning, and while conventional curriculum learning proved ineffective, we developed a new variant of curriculum learning that improved our networks' performance in all experimental conditions. The improved curriculum had a dramatic impact on an addition problem, making it possible to train an LSTM to add two 9-digit numbers with 99 accuracy.", "Recurrent Neural Networks (RNNs) with sophisticated units that implement a gating mechanism have emerged as powerful technique for modeling sequential signals such as speech or electroencephalography (EEG). The latter is the focus on this paper. A significant big data resource, known as the TUH EEG Corpus (TUEEG), has recently become available for EEG research, creating a unique opportunity to evaluate these recurrent units on the task of seizure detection. In this study, we compare two types of recurrent units: long short-term memory units (LSTM) and gated recurrent units (GRU). These are evaluated using a state of the art hybrid architecture that integrates Convolutional Neural Networks (CNNs) with RNNs. We also investigate a variety of initialization methods and show that initialization is crucial since poorly initialized networks cannot be trained. Furthermore, we explore regularization of these convolutional gated recurrent networks to address the problem of overfitting. Our experiments revealed that convolutional LSTM networks can achieve significantly better performance than convolutional GRU networks. The convolutional LSTM architecture with proper initialization and regularization delivers 30 sensitivity at 6 false alarms per 24 hours." ] }
1901.07223
2912693033
As the foundation of driverless vehicles and intelligent robots, Simultaneous Localization and Mapping (SLAM) has attracted much attention these days. However, non-geometric modules of traditional SLAM algorithms are limited by data association tasks and have become a bottleneck preventing the development of SLAM. To deal with such problems, many researchers turn to Deep Learning for help. But most of these studies are limited to virtual datasets or specific environments, and even sacrifice efficiency for accuracy. Thus, they are not practical enough. We propose the DF-SLAM system, which uses deep local feature descriptors obtained by a neural network as a substitute for traditional hand-made features. Experimental results demonstrate its improvements in efficiency and stability. DF-SLAM outperforms popular traditional SLAM systems in various scenes, including challenging scenes with intense illumination changes. Its versatility and mobility fit well with the need to explore new environments. Since we adopt a shallow network to extract local descriptors and keep the rest the same as in the original SLAM systems, our DF-SLAM can still run in real time on a GPU.
Deep learning is considered an excellent solution to SLAM problems due to its superb performance on data association tasks. Some recent studies substitute an end-to-end network for the entire traditional SLAM system, estimating ego-motion from monocular video @cite_41 @cite_8 @cite_42 or performing visual navigation for robots entirely through neural networks @cite_7 @cite_34 . Such works can hardly catch up with traditional methods in accuracy on test datasets. Moreover, since deep learning systems rely heavily on training data, end-to-end systems fail from time to time in the face of new environments and situations. That is to say, the model can hardly predict correct results when there is a large difference between the training scenes and the actual scenes.
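At its core, the end-to-end formulation regresses a 6-DoF relative pose from a pair of frames. The following PyTorch sketch uses a deliberately toy architecture and is not the network of any cited system.

```python
import torch
import torch.nn as nn

class EgoMotionNet(nn.Module):
    """Toy regressor: two stacked RGB frames -> 6-DoF relative pose
    (3 translation + 3 rotation parameters). Purely illustrative."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 16, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.pose = nn.Linear(64, 6)

    def forward(self, frame_t, frame_t1):
        x = torch.cat([frame_t, frame_t1], dim=1)  # stack along channel axis
        return self.pose(self.encoder(x).flatten(1))

net = EgoMotionNet()
pose = net(torch.rand(1, 3, 128, 416), torch.rand(1, 3, 128, 416))
print(pose.shape)  # torch.Size([1, 6])
```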
{ "cite_N": [ "@cite_7", "@cite_8", "@cite_41", "@cite_42", "@cite_34" ], "mid": [ "169439271", "2768755505", "2555820268", "2785512290", "2963906250" ], "abstract": [ "State of the art visual SLAM systems have recently been presented which are capable of accurate, large-scale and real-time performance, but most of these require stereo vision. Important application areas in robotics and beyond open up if similar performance can be demonstrated using monocular vision, since a single camera will always be cheaper, more compact and easier to calibrate than a multi-camera rig. With high quality estimation, a single camera moving through a static scene of course effectively provides its own stereo geometry via frames distributed over time. However, a classic issue with monocular visual SLAM is that due to the purely projective nature of a single camera, motion estimates and map structure can only be recovered up to scale. Without the known inter-camera distance of a stereo rig to serve as an anchor, the scale of locally constructed map portions and the corresponding motion estimates is therefore liable to drift over time. In this paper we describe a new near real-time visual SLAM system which adopts the continuous keyframe optimisation approach of the best current stereo systems, but accounts for the additional challenges presented by monocular input. In particular, we present a new pose-graph optimisation technique which allows for the efficient correction of rotation, translation and scale drift at loop closures. Especially, we describe the Lie group of similarity transformations and its relation to the corresponding Lie algebra. We also present in detail the system’s new image processing front-end which is able accurately to track hundreds of features per frame, and a filter-based approach for feature initialisation within keyframe-based SLAM. Our approach is proven via large-scale simulation and real-world experiments where a camera completes large looped trajectories.", "Visual SLAM (Simultaneous Localization and Mapping) methods typically rely on handcrafted visual features or raw RGB values for establishing correspondences between images. These features, while suitable for sparse mapping, often lead to ambiguous matches at texture-less regions when performing dense reconstruction due to the aperture problem. In this work, we explore the use of learned features for the matching task in dense monocular reconstruction. We propose a novel convolutional neural network (CNN) architecture along with a deeply supervised feature learning scheme for pixel-wise regression of visual descriptors from an image which are best suited for dense monocular SLAM. In particular, our learning scheme minimizes a multi-view matching cost-volume loss with respect to the regressed features at multiple stages within the network, for explicitly learning contextual features that are suitable for dense matching between images captured by a moving monocular camera along the epipolar line. We utilize the learned features from our model for depth estimation inside a real-time dense monocular SLAM framework, where photometric error is replaced by our learned descriptor error. 
Our evaluation on several challenging indoor scenes demonstrate greatly improved accuracy in dense reconstructions of the well celebrated dense SLAM systems like DTAM, without compromising their real-time performance.", "Deep Learning based techniques have been adopted with precision to solve a lot of standard computer vision problems, some of which are image classification, object detection and segmentation. Despite the widespread success of these approaches, they have not yet been exploited largely for solving the standard perception related problems encountered in autonomous navigation such as Visual Odometry (VO), Structure from Motion (SfM) and Simultaneous Localization and Mapping (SLAM). This paper analyzes the problem of Monocular Visual Odometry using a Deep Learning-based framework, instead of the regular 'feature detection and tracking' pipeline approaches. Several experiments were performed to understand the influence of a known unknown environment, a conventional trackable feature and pre-trained activations tuned for object classification on the network's ability to accurately estimate the motion trajectory of the camera (or the vehicle). Based on these observations, we propose a Convolutional Neural Network architecture, best suited for estimating the object's pose under known environment conditions, and displays promising results when it comes to inferring the actual scale using just a single camera in real-time.", "We present a novel approach for unsupervised learning of depth and ego-motion from monocular video. Unsupervised learning removes the need for separate supervisory signals (depth or ego-motion ground truth, or multi-view video). Prior work in unsupervised depth learning uses pixel-wise or gradient-based losses, which only consider pixels in small local neighborhoods. Our main contribution is to explicitly consider the inferred 3D geometry of the scene, enforcing consistency of the estimated 3D point clouds and ego-motion across consecutive frames. This is a challenging task and is solved by a novel (approximate) backpropagation algorithm for aligning 3D structures. We combine this novel 3D-based loss with 2D losses based on photometric quality of frame reconstructions using estimated depth and ego-motion from adjacent frames. We also incorporate validity masks to avoid penalizing areas in which no useful information exists. We test our algorithm on the KITTI dataset and on a video dataset captured on an uncalibrated mobile phone camera. Our proposed approach consistently improves depth estimates on both datasets, and outperforms the state-of-the-art for both depth and ego-motion. Because we only require a simple video, learning depth and ego-motion on large and varied datasets becomes possible. We demonstrate this by training on the low quality uncalibrated video dataset and evaluating on KITTI, ranking among top performing prior methods which are trained on KITTI itself.", "We present a novel approach for unsupervised learning of depth and ego-motion from monocular video. Unsupervised learning removes the need for separate supervisory signals (depth or ego-motion ground truth, or multi-view video). Prior work in unsupervised depth learning uses pixel-wise or gradient-based losses, which only consider pixels in small local neighborhoods. Our main contribution is to explicitly consider the inferred 3D geometry of the whole scene, and enforce consistency of the estimated 3D point clouds and ego-motion across consecutive frames. 
This is a challenging task and is solved by a novel (approximate) backpropagation algorithm for aligning 3D structures. We combine this novel 3D-based loss with 2D losses based on photometric quality of frame reconstructions using estimated depth and ego-motion from adjacent frames. We also incorporate validity masks to avoid penalizing areas in which no useful information exists. We test our algorithm on the KITTI dataset and on a video dataset captured on an uncalibrated mobile phone camera. Our proposed approach consistently improves depth estimates on both datasets, and outperforms the state-of-the-art for both depth and ego-motion. Because we only require a simple video, learning depth and ego-motion on large and varied datasets becomes possible. We demonstrate this by training on the low quality uncalibrated video dataset and evaluating on KITTI, ranking among top performing prior methods which are trained on KITTI itself." ] }
1901.07223
2912693033
As the foundation of driverless vehicles and intelligent robots, Simultaneous Localization and Mapping (SLAM) has attracted much attention these days. However, non-geometric modules of traditional SLAM algorithms are limited by data association tasks and have become a bottleneck preventing the development of SLAM. To deal with such problems, many researchers turn to Deep Learning for help. But most of these studies are limited to virtual datasets or specific environments, and even sacrifice efficiency for accuracy. Thus, they are not practical enough. We propose the DF-SLAM system, which uses deep local feature descriptors obtained by a neural network as a substitute for traditional hand-made features. Experimental results demonstrate its improvements in efficiency and stability. DF-SLAM outperforms popular traditional SLAM systems in various scenes, including challenging scenes with intense illumination changes. Its versatility and mobility fit well with the need to explore new environments. Since we adopt a shallow network to extract local descriptors and keep the rest the same as in the original SLAM systems, our DF-SLAM can still run in real time on a GPU.
To tackle such problems, some researchers focus on replacing only parts of traditional SLAM systems while keeping the traditional pipeline unchanged @cite_1 @cite_28 @cite_3 @cite_39 @cite_45 . Such attempts are still at an embryonic stage and do not achieve better results than traditional methods. One possible explanation for their limited improvement is that they also rely too much on the priors learned from training data, especially when it comes to predicting depth from monocular images. Thus, they are still subject to the same limitations as end-to-end methods. We believe that such experience-based systems are not the best choice for geometric problems.
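The module-replacement strategy can be illustrated with a depth-source interface: classical triangulation and a learned monocular predictor return the same kind of depth map, so the downstream pipeline stays untouched. All function and variable names below are hypothetical.

```python
import numpy as np

def classical_depth(disparity, focal_px, baseline_m):
    """Stereo triangulation: depth = f * B / disparity."""
    return focal_px * baseline_m / np.maximum(disparity, 1e-6)

def learned_depth(rgb_image, model):
    """Drop-in replacement: `model` is any callable mapping an HxWx3 image
    to an HxW depth map (hypothetical interface, not a cited system)."""
    return model(rgb_image)

# The rest of the pipeline only ever sees a depth map, so either source can
# feed it; a weak learned prior therefore degrades the whole system.
rgb = np.zeros((120, 160, 3))
dummy_model = lambda img: np.full(img.shape[:2], 5.0)  # constant 5 m "prior"
print(learned_depth(rgb, dummy_model).shape)           # (120, 160)
print(classical_depth(np.full((120, 160), 8.0), 700.0, 0.5).max())  # 43.75
```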
{ "cite_N": [ "@cite_28", "@cite_1", "@cite_3", "@cite_39", "@cite_45" ], "mid": [ "2119493293", "612478963", "2768755505", "169439271", "2101648351" ], "abstract": [ "We propose a formulation of monocular SLAM which combines live dense reconstruction with shape priors-based 3D tracking and reconstruction. Current live dense SLAM approaches are limited to the reconstruction of visible surfaces. Moreover, most of them are based on the minimisation of a photo-consistency error, which usually makes them sensitive to specularities. In the 3D pose recovery literature, problems caused by imperfect and ambiguous image information have been dealt with by using prior shape knowledge. At the same time, the success of depth sensors has shown that combining joint image and depth information drastically increases the robustness of the classical monocular 3D tracking and 3D reconstruction approaches. In this work we link dense SLAM to 3D object pose and shape recovery. More specifically, we automatically augment our SLAM system with object specific identity, together with 6D pose and additional shape degrees of freedom for the object(s) of known class in the scene, combining image data and depth information for the pose and shape recovery. This leads to a system that allows for full scaled 3D reconstruction with the known object(s) segmented from the scene. The segmentation enhances the clarity, accuracy and completeness of the maps built by the dense SLAM system, while the dense 3D data aids the segmentation process, yielding faster and more reliable convergence than when using 2D image data alone.", "We propose a direct (feature-less) monocular SLAM algorithm which, in contrast to current state-of-the-art regarding direct methods, allows to build large-scale, consistent maps of the environment. Along with highly accurate pose estimation based on direct image alignment, the 3D environment is reconstructed in real-time as pose-graph of keyframes with associated semi-dense depth maps. These are obtained by filtering over a large number of pixelwise small-baseline stereo comparisons. The explicitly scale-drift aware formulation allows the approach to operate on challenging sequences including large variations in scene scale. Major enablers are two key novelties: (1) a novel direct tracking method which operates on ( sim (3) ), thereby explicitly detecting scale-drift, and (2) an elegant probabilistic solution to include the effect of noisy depth values into tracking. The resulting direct monocular SLAM system runs in real-time on a CPU.", "Visual SLAM (Simultaneous Localization and Mapping) methods typically rely on handcrafted visual features or raw RGB values for establishing correspondences between images. These features, while suitable for sparse mapping, often lead to ambiguous matches at texture-less regions when performing dense reconstruction due to the aperture problem. In this work, we explore the use of learned features for the matching task in dense monocular reconstruction. We propose a novel convolutional neural network (CNN) architecture along with a deeply supervised feature learning scheme for pixel-wise regression of visual descriptors from an image which are best suited for dense monocular SLAM. 
In particular, our learning scheme minimizes a multi-view matching cost-volume loss with respect to the regressed features at multiple stages within the network, for explicitly learning contextual features that are suitable for dense matching between images captured by a moving monocular camera along the epipolar line. We utilize the learned features from our model for depth estimation inside a real-time dense monocular SLAM framework, where photometric error is replaced by our learned descriptor error. Our evaluation on several challenging indoor scenes demonstrate greatly improved accuracy in dense reconstructions of the well celebrated dense SLAM systems like DTAM, without compromising their real-time performance.", "State of the art visual SLAM systems have recently been presented which are capable of accurate, large-scale and real-time performance, but most of these require stereo vision. Important application areas in robotics and beyond open up if similar performance can be demonstrated using monocular vision, since a single camera will always be cheaper, more compact and easier to calibrate than a multi-camera rig. With high quality estimation, a single camera moving through a static scene of course effectively provides its own stereo geometry via frames distributed over time. However, a classic issue with monocular visual SLAM is that due to the purely projective nature of a single camera, motion estimates and map structure can only be recovered up to scale. Without the known inter-camera distance of a stereo rig to serve as an anchor, the scale of locally constructed map portions and the corresponding motion estimates is therefore liable to drift over time. In this paper we describe a new near real-time visual SLAM system which adopts the continuous keyframe optimisation approach of the best current stereo systems, but accounts for the additional challenges presented by monocular input. In particular, we present a new pose-graph optimisation technique which allows for the efficient correction of rotation, translation and scale drift at loop closures. Especially, we describe the Lie group of similarity transformations and its relation to the corresponding Lie algebra. We also present in detail the system’s new image processing front-end which is able accurately to track hundreds of features per frame, and a filter-based approach for feature initialisation within keyframe-based SLAM. Our approach is proven via large-scale simulation and real-world experiments where a camera completes large looped trajectories.", "In this paper, we describe a system that can carry out simultaneous localization and mapping (SLAM) in large indoor and outdoor environments using a stereo pair moving with 6 DOF as the only sensor. Unlike current visual SLAM systems that use either bearing-only monocular information or 3-D stereo information, our system accommodates both monocular and stereo. Textured point features are extracted from the images and stored as 3-D points if seen in both images with sufficient disparity, or stored as inverse depth points otherwise. This allows the system to map both near and far features: the first provide distance and orientation, and the second provide orientation information. Unlike other vision-only SLAM systems, stereo does not suffer from ldquoscale driftrdquo because of unobservability problems, and thus, no other information such as gyroscopes or accelerometers is required in our system. 
Our SLAM algorithm generates sequences of conditionally independent local maps that can share information related to the camera motion and common features being tracked. The system computes the full map using the novel conditionally independent divide and conquer algorithm, which allows constant time operation most of the time, with linear time updates to compute the full map. To demonstrate the robustness and scalability of our system, we show experimental results in indoor and outdoor urban environments of 210 m and 140 m loop trajectories, with the stereo camera being carried in hand by a person walking at normal walking speeds of 4--5 km/h." ] }
1901.07223
2912693033
As the foundation of driverless vehicles and intelligent robots, Simultaneous Localization and Mapping (SLAM) has attracted much attention these days. However, non-geometric modules of traditional SLAM algorithms are limited by data association tasks and have become a bottleneck preventing the development of SLAM. To deal with such problems, many researchers turn to Deep Learning for help. But most of these studies are limited to virtual datasets or specific environments, and even sacrifice efficiency for accuracy. Thus, they are not practical enough. We propose the DF-SLAM system, which uses deep local feature descriptors obtained by a neural network as a substitute for traditional hand-made features. Experimental results demonstrate its improvements in efficiency and stability. DF-SLAM outperforms popular traditional SLAM systems in various scenes, including challenging scenes with intense illumination changes. Its versatility and mobility fit well with the need to explore new environments. Since we adopt a shallow network to extract local descriptors and keep the rest the same as in the original SLAM systems, our DF-SLAM can still run in real time on a GPU.
A simple but effective method is to directly improve the module that limits the performance of traditional SLAM, i.e., stereo matching between frames. Some works compute a similarity confidence for pairs of local features @cite_44 @cite_48 @cite_10 , which makes traditional matching strategies such as Euclidean or cosine distance inapplicable. SuperPoint @cite_16 trains an end-to-end network that extracts both local feature detectors and descriptors from raw images in a single forward pass. However, the efficiency of SuperPoint remains unverified, as results are reported only on synthetic and virtual datasets and the method has not been integrated into a real SLAM system for evaluation.
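Descriptors that live in a metric space, whether hand-crafted or learned, can reuse classical matching machinery unchanged. Below is a numpy sketch of Lowe-style ratio-test matching under Euclidean distance, with random 128-D vectors standing in for learned descriptors.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Ratio-test matching: keep a match only when the nearest neighbor is
    clearly closer than the second nearest (Lowe's criterion)."""
    # Pairwise squared Euclidean distances, shape (len(a), len(b)).
    d2 = ((desc_a[:, None, :] - desc_b[None, :, :]) ** 2).sum(-1)
    nearest = np.argsort(d2, axis=1)[:, :2]
    matches = []
    for i, (j1, j2) in enumerate(nearest):
        if d2[i, j1] < (ratio ** 2) * d2[i, j2]:  # ratio on squared distances
            matches.append((i, j1))
    return matches

rng = np.random.default_rng(0)
a = rng.standard_normal((50, 128))  # 128-D descriptors, e.g. learned ones
b = np.vstack([a[:10] + 0.05 * rng.standard_normal((10, 128)),
               rng.standard_normal((40, 128))])
print(len(match_descriptors(a, b)))  # recovers roughly the 10 true matches
```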
{ "cite_N": [ "@cite_44", "@cite_48", "@cite_16", "@cite_10" ], "mid": [ "2101648351", "2218842719", "2025199671", "2535121179" ], "abstract": [ "In this paper, we describe a system that can carry out simultaneous localization and mapping (SLAM) in large indoor and outdoor environments using a stereo pair moving with 6 DOF as the only sensor. Unlike current visual SLAM systems that use either bearing-only monocular information or 3-D stereo information, our system accommodates both monocular and stereo. Textured point features are extracted from the images and stored as 3-D points if seen in both images with sufficient disparity, or stored as inverse depth points otherwise. This allows the system to map both near and far features: the first provide distance and orientation, and the second provide orientation information. Unlike other vision-only SLAM systems, stereo does not suffer from ldquoscale driftrdquo because of unobservability problems, and thus, no other information such as gyroscopes or accelerometers is required in our system. Our SLAM algorithm generates sequences of conditionally independent local maps that can share information related to the camera motion and common features being tracked. The system computes the full map using the novel conditionally independent divide and conquer algorithm, which allows constant time operation most of the time, with linear time updates to compute the full map. To demonstrate the robustness and scalability of our system, we show experimental results in indoor and outdoor urban environments of 210 m and 140 m loop trajectories, with the stereo camera being carried in hand by a person walking at normal walking speeds of 4--5 km h.", "We propose a novel Large-Scale Direct SLAM algorithm for stereo cameras (Stereo LSD-SLAM) that runs in real-time at high frame rate on standard CPUs. In contrast to sparse interest-point based methods, our approach aligns images directly based on the photoconsistency of all high-contrast pixels, including corners, edges and high texture areas. It concurrently estimates the depth at these pixels from two types of stereo cues: Static stereo through the fixed-baseline stereo camera setup as well as temporal multi-view stereo exploiting the camera motion. By incorporating both disparity sources, our algorithm can even estimate depth of pixels that are under-constrained when only using fixed-baseline stereo. Using a fixed baseline, on the other hand, avoids scale-drift that typically occurs in pure monocular SLAM.We furthermore propose a robust approach to enforce illumination invariance, capable of handling aggressive brightness changes between frames - greatly improving the performance in realistic settings. In experiments, we demonstrate state-of-the-art results on stereo SLAM benchmarks such as Kitti or challenging datasets from the EuRoC Challenge 3 for micro aerial vehicles.", "We propose a method in this paper to perform sequence-based appearance SLAM in an efficient and effective way. Sequence-based SLAM (or SeqSLAM for short) makes use of the image descriptors extracted from a series of consecutive frames and matching is done between two such image sequences. It has been shown to be effective in dealing with significant illumination change where localization and mapping can be conducted under different time periods and weather conditions. 
To address the computational issue that can arise from the exhaustive search of the candidate sequences with the increase of map size, we use a particle filter to implement the Bayes filtering framework of estimating the true match. The resampling of the particles allows us to maintain only a small number of hypotheses while still capturing the true distribution of the robot location. Our method is highly scalable and efficient, validated on a large dataset with comparable results to the original algorithm in terms of performance.", "The paper presents a method aiming at improving the reliability of Simultaneous Localization And Mapping (SLAM) approaches based on vision systems. Classical SLAM approaches treat camera capturing time as negligible, and the recorded frames as sharp and well-defined, but this hypothesis does not hold true when the camera is moving too fast. In such cases, in fact, frames may be severely degraded by motion blur, making features matching task a difficult operation. The method here presented is based on a novel approach that combines the benefits of a fully probabilistic SLAM algorithm with the basic ideas behind modern motion blur handling algorithms. Whereby the Kalman Filter, the new approach predicts the best possible blur Point Spread Function (PSF) for each feature and performs matching using also this information." ] }
1901.07223
2912693033
As the foundation of driverless vehicles and intelligent robots, Simultaneous Localization and Mapping (SLAM) has attracted much attention these days. However, non-geometric modules of traditional SLAM algorithms are limited by data association tasks and have become a bottleneck preventing the development of SLAM. To deal with such problems, many researchers turn to Deep Learning for help. But most of these studies are limited to virtual datasets or specific environments, and even sacrifice efficiency for accuracy. Thus, they are not practical enough. We propose the DF-SLAM system, which uses deep local feature descriptors obtained by a neural network as a substitute for traditional hand-made features. Experimental results demonstrate its improvements in efficiency and stability. DF-SLAM outperforms popular traditional SLAM systems in various scenes, including challenging scenes with intense illumination changes. Its versatility and mobility fit well with the need to explore new environments. Since we adopt a shallow network to extract local descriptors and keep the rest the same as in the original SLAM systems, our DF-SLAM can still run in real time on a GPU.
In parallel with the long history of SLAM, considerable effort has been devoted to local features. Building on classical hand-crafted local features such as SIFT @cite_26 , SURF @cite_14 , and ORB @cite_37 , early combinations of low-level machine learning and local feature descriptors produced PCA-SIFT @cite_24 , ASD @cite_17 , BOLD @cite_0 , Binboost @cite_36 , RFD @cite_32 , RMGD @cite_18 , GRIEF @cite_20 , etc. Some of these attempts concentrate on dimensionality reduction and utilize various methods to map high-dimensional descriptors to a low-dimensional space, and thus lose a great amount of information from the raw image. Others make use of binary features; some of them enhance a traditional feature for specific environments to fit special requirements @cite_20 but lack portability. Most of these studies put forward a new kind of feature without further tests or applications.
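For contrast with the learned descriptors discussed above, here is a minimal OpenCV sketch of the classical binary-feature pipeline: ORB detection followed by Hamming-distance brute-force matching. The synthetic noise image is only there to keep the snippet self-contained.

```python
import cv2
import numpy as np

# A synthetic noise image pair (second is a shifted copy) keeps the snippet
# self-contained; in practice these would be consecutive video frames.
rng = np.random.default_rng(0)
img1 = rng.integers(0, 256, size=(240, 320), dtype=np.uint8)
img2 = np.roll(img1, 5, axis=1)

orb = cv2.ORB_create(nfeatures=500)  # 256-bit binary descriptors
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Binary descriptors are compared with Hamming distance, which is what makes
# them cheap enough for real-time SLAM front ends.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2) if des1 is not None and des2 is not None else []
print(len(kp1), len(matches))
```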
{ "cite_N": [ "@cite_37", "@cite_14", "@cite_26", "@cite_18", "@cite_36", "@cite_32", "@cite_24", "@cite_0", "@cite_20", "@cite_17" ], "mid": [ "2768755505", "1893620550", "2739423245", "645436802", "2762085884", "2526145926", "2562722635", "2784112303", "2050850512", "2767841593" ], "abstract": [ "Visual SLAM (Simultaneous Localization and Mapping) methods typically rely on handcrafted visual features or raw RGB values for establishing correspondences between images. These features, while suitable for sparse mapping, often lead to ambiguous matches at texture-less regions when performing dense reconstruction due to the aperture problem. In this work, we explore the use of learned features for the matching task in dense monocular reconstruction. We propose a novel convolutional neural network (CNN) architecture along with a deeply supervised feature learning scheme for pixel-wise regression of visual descriptors from an image which are best suited for dense monocular SLAM. In particular, our learning scheme minimizes a multi-view matching cost-volume loss with respect to the regressed features at multiple stages within the network, for explicitly learning contextual features that are suitable for dense matching between images captured by a moving monocular camera along the epipolar line. We utilize the learned features from our model for depth estimation inside a real-time dense monocular SLAM framework, where photometric error is replaced by our learned descriptor error. Our evaluation on several challenging indoor scenes demonstrate greatly improved accuracy in dense reconstructions of the well celebrated dense SLAM systems like DTAM, without compromising their real-time performance.", "Local binary descriptors are attracting increasingly attention due to their great advantages in computational speed, which are able to achieve real-time performance in numerous image vision applications. Various methods have been proposed to learn data-dependent binary descriptors. However, most existing binary descriptors aim overly at computational simplicity at the expense of significant information loss which causes ambiguity in similarity measure using Hamming distance. In this paper, by considering multiple features might share complementary information, we present a novel local binary descriptor, referred as ring-based multi-grouped descriptor (RMGD), to successfully bridge the performance gap between current binary and floated-point descriptors. Our contributions are twofold. First, we introduce a new pooling configuration based on spatial ring-region sampling, allowing for involving binary tests on the full set of pairwise regions with different shapes, scales, and distances. This leads to a more meaningful description than the existing methods which normally apply a limited set of pooling configurations. Then, an extended Adaboost is proposed for an efficient bit selection by emphasizing high variance and low correlation, achieving a highly compact representation. Second, the RMGD is computed from multiple image properties where binary strings are extracted. We cast multi-grouped features integration as rankSVM or sparse support vector machine learning problem, so that different features can compensate strongly for each other, which is the key to discriminativeness and robustness. 
The performance of the RMGD was evaluated on a number of publicly available benchmarks, where the RMGD outperforms the state-of-the-art binary descriptors significantly.", "Traditional approaches to simultaneous localization and mapping (SLAM) rely on low-level geometric features such as points, lines, and planes. They are unable to assign semantic labels to landmarks observed in the environment. Furthermore, loop closure recognition based on low-level features is often viewpoint-dependent and subject to failure in ambiguous or repetitive environments. On the other hand, object recognition methods can infer landmark classes and scales, resulting in a small set of easily recognizable landmarks, ideal for view-independent unambiguous loop closure. In a map with several objects of the same class, however, a crucial data association problem exists. While data association and recognition are discrete problems usually solved using discrete inference, classical SLAM is a continuous optimization over metric information. In this paper, we formulate an optimization problem over sensor states and semantic landmark positions that integrates metric information, semantic information, and data associations, and decompose it into two interconnected problems: an estimation of discrete data association and landmark class probabilities, and a continuous optimization over the metric states. The estimated landmark and robot poses affect the association and class distributions, which in turn affect the robot-landmark pose optimization. The performance of our algorithm is demonstrated on indoor and outdoor datasets.", "We present a heterogeneous landmark-based visual navigation approach for a monocular mobile robot. We utilize heterogeneous visual features, such as points, line segments, lines, planes, and vanishing points, and their inner geometric constraints managed by a novel multilayer feature graph (MFG). Our method extends the local bundle adjustment-based visual simultaneous localization and mapping (SLAM) framework by explicitly exploiting the heterogeneous features and their inner geometric relationships in an unsupervised manner. As the result, our heterogeneous landmark-based visual navigation algorithm takes a video stream as input, initializes and iteratively updates MFG based on extracted key frames, and refines robot localization and MFG landmarks through the process. We present pseudocode for the algorithm and analyze its complexity. We have evaluated our method and compared it with state-of-the-art point landmark-based visual SLAM methods using multiple indoor and outdoor datasets. In particular, on the KITTI dataset, our method reduces the translational error by 52.5% under urban sequences where rectilinear structures dominate the scene.", "Sufficient training examples are the fundamental requirement for most of the learning tasks. However, collecting well-labelled training examples is costly. Inspired by Zero-shot Learning (ZSL) that can make use of visual attributes or natural language semantics as an intermediate level clue to associate low-level features with high-level classes, in a novel extension of this idea, we aim to synthesise training data for novel classes using only semantic attributes. Despite the simplicity of this idea, there are several challenges. First, how to prevent the synthesised data from over-fitting to training classes? Second, how to guarantee the synthesised data is discriminative for ZSL tasks? 
Third, we observe that only a few dimensions of the learnt features gain high variances whereas most of the remaining dimensions are not informative. Thus, the question is how to make the concentrated information diffuse to most of the dimensions of synthesised data. To address the above issues, we propose a novel embedding algorithm named Unseen Visual Data Synthesis (UVDS) that projects semantic features to the high-dimensional visual feature space. Two main techniques are introduced in our proposed algorithm. (1) We introduce a latent embedding space which aims to reconcile the structural difference between the visual and semantic spaces, meanwhile preserve the local structure. (2) We propose a novel Diffusion Regularisation (DR) that explicitly forces the variances to diffuse over most dimensions of the synthesised data. By an orthogonal rotation (more precisely, an orthogonal transformation), DR can remove the redundant correlated attributes and further alleviate the over-fitting problem. On four benchmark datasets, we demonstrate the benefit of using synthesised unseen data for zero-shot learning. Extensive experimental results suggest that our proposed approach significantly outperforms the state-of-the-art methods.", "Visual search and image retrieval underpin numerous applications, however the task is still challenging predominantly due to the variability of object appearance and ever increasing size of the databases, often exceeding billions of images. Prior art methods rely on aggregation of local scale-invariant descriptors, such as SIFT, via mechanisms including Bag of Visual Words (BoW), Vector of Locally Aggregated Descriptors (VLAD) and Fisher Vectors (FV). However, their performance is still short of what is required. This paper presents a novel method for deriving a compact and distinctive representation of image content called Robust Visual Descriptor with Whitening (RVD-W). It significantly advances the state of the art and delivers world-class performance. In our approach local descriptors are rank-assigned to multiple clusters. Residual vectors are then computed in each cluster, normalized using a direction-preserving normalization function and aggregated based on the neighborhood rank. Importantly, the residual vectors are de-correlated and whitened in each cluster before aggregation, leading to a balanced energy distribution in each dimension and significantly improved performance. We also propose a new post-PCA normalization approach which improves separability between the matching and non-matching global descriptors. This new normalization benefits not only our RVD-W descriptor but also improves existing approaches based on FV and VLAD aggregation. Furthermore, we show that the aggregation framework developed using hand-crafted SIFT features also performs exceptionally well with Convolutional Neural Network (CNN) based features. The RVD-W pipeline outperforms state-of-the-art global descriptors on both the Holidays and Oxford datasets. On the large scale datasets, Holidays1M and Oxford1M, SIFT-based RVD-W representation obtains a mAP of 45.1 and 35.1 percent, while CNN-based RVD-W achieve a mAP of 63.5 and 44.8 percent, all yielding superior performance to the state-of-the-art.", "An increasing number of simultaneous localization and mapping (SLAM) systems are using appearance-based localization to improve the quality of pose estimates. 
However, with the growing time-spans and size of the areas we want to cover, appearance-based maps are often becoming too large to handle and are consisting of features that are not always reliable for localization purposes. This paper presents a method for selecting map features that are persistent over time and thus suited for long-term localization. Our methodology relies on a CNN classifier based on image patches and depth maps for recognizing which features are suitable for life-long matchability. Thus, the classifier not only considers the appearance of a feature but also takes into account its expected lifetime. As a result, our feature selection approach produces more compact maps with a high fraction of temporally-stable features compared to the current state-of-the-art, while rejecting unstable features that typically harm localization. Our approach is validated on indoor and outdoor datasets, that span over a period of several months.", "We present a dense volumetric simultaneous localisation and mapping (SLAM) framework that uses an octree representation for efficient fusion and rendering of either a truncated signed distance field (TSDF) or an occupancy map. The primary aim of this letter is to use one single representation of the environment that can be used not only for robot pose tracking and high-resolution mapping, but seamlessly for planning. We show that our highly efficient octree representation of space fits SLAM and planning purposes in a real-time control loop. In a comprehensive evaluation, we demonstrate dense SLAM accuracy and runtime performance on-par with flat hashing approaches when using TSDF-based maps, and considerable speed-ups when using occupancy mapping compared to standard occupancy maps frameworks. Our SLAM system can run at 10–40 Hz on a modern quadcore CPU, without the need for massive parallelization on a GPU. We, furthermore, demonstrate a probabilistic occupancy mapping as an alternative to TSDF mapping in dense SLAM and show its direct applicability to online motion planning, using the example of informed rapidly-exploring random trees (RRT @math ).", "This paper proposes a new visual SLAM technique that not only integrates 6 degrees of freedom (DOF) pose and dense structure but also simultaneously integrates the colour information contained in the images over time. This involves developing an inverse model for creating a super-resolution map from many low resolution images. Contrary to classic super-resolution techniques, this is achieved here by taking into account full 3D translation and rotation within a dense localisation and mapping framework. This not only allows to take into account the full range of image deformations but also allows to propose a novel criteria for combining the low resolution images together based on the difference in resolution between different images in 6D space. Another originality of the proposed approach with respect to the current state of the art lies in the minimisation of both colour (RGB) and depth (D) errors, whilst competing approaches only minimise geometry. Several results are given showing that this technique runs in real-time (30Hz) and is able to map large scale environments in high-resolution whilst simultaneously improving the accuracy and robustness of the tracking.", "We propose a visual SLAM (Simultaneous Localization And Mapping) system able to perform robustly in populated environments. The image stream from a moving RGB-D camera is the only input to the system. 
The computed map in real-time is composed of two layers: 1) The unpopulated geometrical layer, which describes the geometry of the bare scene as an occupancy grid where pieces of information corresponding to people have been removed; 2) A semantic human activity layer, which describes the trajectory of each person with respect to the unpopulated map, labelling an area as \"traversable\" or \"occupied\". Our proposal is to embed a real-time human tracker into the system. The purpose is twofold. First, to mask out of the rigid SLAM pipeline the image regions occupied by people, which boosts the robustness, the relocation, the accuracy and the reusability of the geometrical map in populated scenes. Secondly, to estimate the full trajectory of each detected person with respect to the scene map, irrespective of the location of the moving camera when the person was imaged. The proposal is tested with two popular visual SLAM systems, C2TAM and ORBSLAM2, proving its generality. The experiments process a benchmark of RGB-D sequences from camera onboard a mobile robot. They prove the robustness, accuracy and reuse capabilities of the two layer map for populated scenes." ] }
1901.07223
2912693033
As the foundation of driverless vehicles and intelligent robots, Simultaneous Localization and Mapping (SLAM) has attracted much attention these days. However, non-geometric modules of traditional SLAM algorithms are limited by data association tasks and have become a bottleneck preventing the development of SLAM. To deal with such problems, many researchers turn to Deep Learning for help. But most of these studies are limited to virtual datasets or specific environments, and some even sacrifice efficiency for accuracy. Thus, they are not practical enough. We propose the DF-SLAM system, which uses deep local feature descriptors obtained by a neural network as a substitute for traditional hand-crafted features. Experimental results demonstrate its improvements in efficiency and stability. DF-SLAM outperforms popular traditional SLAM systems in various scenes, including challenging scenes with intense illumination changes. Its versatility and mobility fit well with the need to explore new environments. Since we adopt a shallow network to extract local descriptors and keep the other components the same as in the original SLAM systems, our DF-SLAM can still run in real time on a GPU.
Thanks to the boom in Deep Learning, researchers have gone further. End-to-end networks consisting of multiple independent components @cite_31 @cite_16 @cite_38 @cite_30 can not only produce local feature descriptors in one forward computation but also extract local feature detectors.
{ "cite_N": [ "@cite_30", "@cite_38", "@cite_31", "@cite_16" ], "mid": [ "2462457117", "2785325870", "2560096627", "2963542991" ], "abstract": [ "Most convolutional neural networks (CNNs) lack midlevel layers that model semantic parts of objects. This limits CNN-based methods from reaching their full potential in detecting and utilizing small semantic parts in recognition. Introducing such mid-level layers can facilitate the extraction of part-specific features which can be utilized for better recognition performance. This is particularly important in the domain of fine-grained recognition. In this paper, we propose a new CNN architecture that integrates semantic part detection and abstraction (SPDACNN) for fine-grained classification. The proposed network has two sub-networks: one for detection and one for recognition. The detection sub-network has a novel top-down proposal method to generate small semantic part candidates for detection. The classification sub-network introduces novel part layers that extract features from parts detected by the detection sub-network, and combine them for recognition. As a result, the proposed architecture provides an end-to-end network that performs detection, localization of multiple semantic parts, and whole object recognition within one framework that shares the computation of convolutional filters. Our method outperforms state-of-theart methods with a large margin for small parts detection (e.g. our precision of 93.40 vs the best previous precision of 74.00 for detecting the head on CUB-2011). It also compares favorably to the existing state-of-the-art on finegrained classification, e.g. it achieves 85.14 accuracy on CUB-2011.", "Over the last years, deep convolutional neural networks (ConvNets) have transformed the field of computer vision thanks to their unparalleled capacity to learn high level semantic image features. However, in order to successfully learn those features, they usually require massive amounts of manually labeled data, which is both expensive and impractical to scale. Therefore, unsupervised semantic feature learning, i.e., learning without requiring manual annotation effort, is of crucial importance in order to successfully harvest the vast amount of visual data that are available today. In our work we propose to learn image features by training ConvNets to recognize the 2d rotation that is applied to the image that it gets as input. We demonstrate both qualitatively and quantitatively that this apparently simple task actually provides a very powerful supervisory signal for semantic feature learning. We exhaustively evaluate our method in various unsupervised feature learning benchmarks and we exhibit in all of them state-of-the-art performance. Specifically, our results on those benchmarks demonstrate dramatic improvements w.r.t. prior state-of-the-art approaches in unsupervised representation learning and thus significantly close the gap with supervised feature learning. For instance, in PASCAL VOC 2007 detection task our unsupervised pre-trained AlexNet model achieves the state-of-the-art (among unsupervised methods) mAP of 54.4 that is only 2.4 points lower from the supervised case. We get similarly striking results when we transfer our unsupervised learned features on various other tasks, such as ImageNet classification, PASCAL classification, PASCAL segmentation, and CIFAR-10 classification. 
The code and models of our paper will be published on: this https URL .", "Deep convolution neural networks (CNNs) have demonstrated advanced performance on single-label image classification, and various progress also has been made to apply CNN methods on multilabel image classification, which requires annotating objects, attributes, scene categories, etc., in a single shot. Recent state-of-the-art approaches to the multilabel image classification exploit the label dependencies in an image, at the global level, largely improving the labeling capacity. However, predicting small objects and visual concepts is still challenging due to the limited discrimination of the global visual features. In this paper, we propose a regional latent semantic dependencies model (RLSD) to address this problem. The utilized model includes a fully convolutional localization architecture to localize the regions that may contain multiple highly dependent labels. The localized regions are further sent to the recurrent neural networks to characterize the latent semantic dependencies at the regional level. Experimental results on several benchmark datasets show that our proposed model achieves the best performance compared to the state-of-the-art models, especially for predicting small objects occurring in the images. Also, we set up an upper bound model (RLSD+ft-RPN) using bounding-box coordinates during training, and the experimental results also show that our RLSD can approach the upper bound without using the bounding-box annotations, which is more realistic in the real world.", "Abstract: We present an integrated framework for using Convolutional Networks for classification, localization and detection. We show how a multiscale and sliding window approach can be efficiently implemented within a ConvNet. We also introduce a novel deep learning approach to localization by learning to predict object boundaries. Bounding boxes are then accumulated rather than suppressed in order to increase detection confidence. We show that different tasks can be learned simultaneously using a single shared network. This integrated framework is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013) and obtained very competitive results for the detection and classifications tasks. In post-competition work, we establish a new state of the art for the detection task. Finally, we release a feature extractor from our best model called OverFeat." ] }
1901.07223
2912693033
As the foundation of driverless vehicles and intelligent robots, Simultaneous Localization and Mapping (SLAM) has attracted much attention these days. However, non-geometric modules of traditional SLAM algorithms are limited by data association tasks and have become a bottleneck preventing the development of SLAM. To deal with such problems, many researchers turn to Deep Learning for help. But most of these studies are limited to virtual datasets or specific environments, and some even sacrifice efficiency for accuracy. Thus, they are not practical enough. We propose the DF-SLAM system, which uses deep local feature descriptors obtained by a neural network as a substitute for traditional hand-crafted features. Experimental results demonstrate its improvements in efficiency and stability. DF-SLAM outperforms popular traditional SLAM systems in various scenes, including challenging scenes with intense illumination changes. Its versatility and mobility fit well with the need to explore new environments. Since we adopt a shallow network to extract local descriptors and keep the other components the same as in the original SLAM systems, our DF-SLAM can still run in real time on a GPU.
Focusing only on descriptors, most researchers adopt multi-branch CNN-based architectures such as Siamese and triplet networks. Multi-branch networks were first proposed in 1994 to verify whether handwritten signatures were consistent @cite_27 . Experiments related to similarity measurement further confirm the superiority of this multi-branch structure. As a result, Siamese and triplet networks have become the main architectures employed in local feature descriptor tasks. MatchNet @cite_40 and DeepCompare @cite_6 are typical Siamese networks. Each branch consists of a feature network and a metric network that determines the similarity between two descriptors, so the final output is a similarity confidence. Together with a metric learning layer, @cite_46 uses a triplet structure and achieves better performance. These achievements reveal the potential of triplet networks. However, these models prove unsuitable for traditional nearest neighbor search. Therefore, studies that directly output local feature descriptors have been developed.
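To make the triplet idea above concrete, here is a minimal sketch of a margin-based triplet loss over L2-normalized patch descriptors (in PyTorch). The batch shapes, the margin value, and the use of squared Euclidean distance are illustrative assumptions, not the exact loss of @cite_46 or of DF-SLAM.

```python
import torch
import torch.nn.functional as F

def triplet_descriptor_loss(anchor, positive, negative, margin=1.0):
    """Margin-based triplet loss on (B, D) descriptor batches coming from
    the three weight-sharing branches of a triplet network."""
    anchor = F.normalize(anchor, dim=1)      # unit-norm descriptors
    positive = F.normalize(positive, dim=1)
    negative = F.normalize(negative, dim=1)
    d_pos = (anchor - positive).pow(2).sum(dim=1)  # matching-pair distance
    d_neg = (anchor - negative).pow(2).sum(dim=1)  # non-matching-pair distance
    # Pull matching patches together; push non-matching ones apart by a margin.
    return F.relu(d_pos - d_neg + margin).mean()
```

Because the loss is defined directly on descriptor distances, the trained branch can be used on its own to emit descriptors for standard nearest neighbor search, which is exactly what the Siamese models with a metric network cannot do.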
{ "cite_N": [ "@cite_46", "@cite_40", "@cite_27", "@cite_6" ], "mid": [ "2219193941", "2787941778", "2963854930", "1577117850" ], "abstract": [ "Recent innovations in training deep convolutional neural network (ConvNet) models have motivated the design of new methods to automatically learn local image descriptors. The latest deep ConvNets proposed for this task consist of a siamese network that is trained by penalising misclassification of pairs of local image patches. Current results from machine learning show that replacing this siamese by a triplet network can improve the classification accuracy in several problems, but this has yet to be demonstrated for local image descriptor learning. Moreover, current siamese and triplet networks have been trained with stochastic gradient descent that computes the gradient from individual pairs or triplets of local image patches, which can make them prone to overfitting. In this paper, we first propose the use of triplet networks for the problem of local image descriptor learning. Furthermore, we also propose the use of a global loss that minimises the overall classification error in the training set, which can improve the generalisation capability of the model. Using the UBC benchmark dataset for comparing local image descriptors, we show that the triplet network produces a more accurate embedding than the siamese network in terms of the UBC dataset errors. Moreover, we also demonstrate that a combination of the triplet and global losses produces the best embedding in the field, using this triplet network. Finally, we also show that the use of the central-surround siamese network trained with the global loss produces the best result of the field on the UBC dataset. Pre-trained models are available online at this https URL", "Observing that Semantic features learned in an image classification task and Appearance features learned in a similarity matching task complement each other, we build a twofold Siamese network, named SA-Siam, for real-time object tracking. SA-Siam is composed of a semantic branch and an appearance branch. Each branch is a similarity-learning Siamese network. An important design choice in SA-Siam is to separately train the two branches to keep the heterogeneity of the two types of features. In addition, we propose a channel attention mechanism for the semantic branch. Channel-wise weights are computed according to the channel activations around the target position. While the inherited architecture from SiamFC SiamFC allows our tracker to operate beyond real-time, the twofold design and the attention mechanism significantly improve the tracking performance. The proposed SA-Siam outperforms all other real-time trackers by a large margin on OTB-2013 50 100 benchmarks.", "Observing that Semantic features learned in an image classification task and Appearance features learned in a similarity matching task complement each other, we build a twofold Siamese network, named SA-Siam, for real-time object tracking. SA-Siam is composed of a semantic branch and an appearance branch. Each branch is a similaritylearning Siamese network. An important design choice in SA-Siam is to separately train the two branches to keep the heterogeneity of the two types of features. In addition, we propose a channel attention mechanism for the semantic branch. Channel-wise weights are computed according to the channel activations around the target position. 
While the inherited architecture from SiamFC [3] allows our tracker to operate beyond real-time, the twofold design and the attention mechanism significantly improve the tracking performance. The proposed SA-Siam outperforms all other real-time trackers by a large margin on OTB-2013/50/100 benchmarks.", "In this paper we propose a novel framework for learning local image descriptors in a discriminative manner. For this purpose we explore a siamese architecture of Deep Convolutional Neural Networks (CNN), with a Hinge embedding loss on the L2 distance between descriptors. Since a siamese architecture uses pairs rather than single image patches to train, there exist a large number of positive samples and an exponential number of negative samples. We propose to explore this space with a stochastic sampling of the training set, in combination with an aggressive mining strategy over both the positive and negative samples which we denote as \"fracking\". We perform a thorough evaluation of the architecture hyper-parameters, and demonstrate large performance gains compared to both standard CNN learning strategies, hand-crafted image descriptors like SIFT, and the state-of-the-art on learned descriptors: up to 2.5x vs SIFT and 1.5x vs the state-of-the-art in terms of the area under the curve (AUC) of the Precision-Recall curve." ] }
1901.06778
2912495464
Head pose estimation, which computes the intrinsic Euler angles (yaw, pitch, roll) of a target human head, is crucial for gaze estimation, face alignment and 3D reconstruction. Traditional approaches to head pose estimation rely heavily on the accuracy of facial landmarks and solve the correspondence problem between 2D facial landmarks and a mean 3D head model (ad-hoc fitting procedures), which seriously limits their performance, especially when the face is not clearly visible. Existing landmark-free methods, however, either treat head pose estimation as a sub-problem or introduce extra error during problem reduction. Therefore, in this paper, we present an efficient hybrid coarse-fine classification scheme to deal with the issues above. First of all, we extend previous work with stricter fine classification by increasing the class number. Then, we introduce our hybrid coarse-fine classification scheme into the network. Integral regression is adopted to get the final prediction. Our proposed approach to head pose estimation is evaluated on three challenging benchmarks; we achieve the state-of-the-art on AFLW2000 and BIWI, and our approach closes the gap with the state-of-the-art on AFLW.
Another recent work, by @cite_16 , achieves great performance on public datasets. They propose to use a higher-level representation to regress the head pose with deep learning architectures: uncertainty maps, in the form of 2D soft localization heatmap images over five selected facial landmarks, are passed through a convolutional neural network as input channels to regress the head pose. However, this approach still cannot avoid the problem of landmark invisibility even though it uses only coarse locations; in particular, the method involves just five landmarks, which makes it very fragile when landmarks are invisible.
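As a rough illustration of this input representation, the sketch below renders one 2D Gaussian "uncertainty map" per landmark; the map resolution and Gaussian width are assumptions for illustration, not the exact parameters of @cite_16 .

```python
import numpy as np

def landmark_heatmaps(landmarks, size=64, sigma=2.0):
    """Render a (K, size, size) stack of Gaussian heatmaps, one per (x, y)
    landmark (e.g. the five keypoints: ears, eyes, nose), usable as CNN
    input channels."""
    ys, xs = np.mgrid[0:size, 0:size]
    maps = []
    for (x, y) in landmarks:
        g = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2.0 * sigma ** 2))
        maps.append(g / (g.max() + 1e-12))  # peak of ~1 at the landmark
    return np.stack(maps, axis=0)
```

An occluded landmark simply produces a near-empty channel, which is why the fragility to invisibility noted above shows up directly at the input.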
{ "cite_N": [ "@cite_16" ], "mid": [ "2902228397" ], "abstract": [ "Monocular head pose estimation requires learning a model that computes the intrinsic Euler angles for pose (yaw, pitch, roll) from an input image of human face. Annotating ground truth head pose angles for images in the wild is difficult and requires ad-hoc fitting procedures (which provides only coarse and approximate annotations). This highlights the need for approaches which can train on data captured in controlled environment and generalize on the images in the wild (with varying appearance and illumination of the face). Most present day deep learning approaches which learn a regression function directly on the input images fail to do so. To this end, we propose to use a higher level representation to regress the head pose while using deep learning architectures. More specifically, we use the uncertainty maps in the form of 2D soft localization heatmap images over five facial keypoints, namely left ear, right ear, left eye, right eye and nose, and pass them through an convolutional neural network to regress the head-pose. We show head pose estimation results on two challenging benchmarks BIWI and AFLW and our approach surpasses the state of the art on both the datasets." ] }
1901.06778
2912495464
Head pose estimation, which computes the intrinsic Euler angles (yaw, pitch, roll) of a target human head, is crucial for gaze estimation, face alignment and 3D reconstruction. Traditional approaches to head pose estimation rely heavily on the accuracy of facial landmarks and solve the correspondence problem between 2D facial landmarks and a mean 3D head model (ad-hoc fitting procedures), which seriously limits their performance, especially when the face is not clearly visible. Existing landmark-free methods, however, either treat head pose estimation as a sub-problem or introduce extra error during problem reduction. Therefore, in this paper, we present an efficient hybrid coarse-fine classification scheme to deal with the issues above. First of all, we extend previous work with stricter fine classification by increasing the class number. Then, we introduce our hybrid coarse-fine classification scheme into the network. Integral regression is adopted to get the final prediction. Our proposed approach to head pose estimation is evaluated on three challenging benchmarks; we achieve the state-of-the-art on AFLW2000 and BIWI, and our approach closes the gap with the state-of-the-art on AFLW.
Although recent state-of-the-art landmark-based methods give better predictions when ground-truth landmarks are available, they suffer from landmark invisibility and from the limited accuracy of landmark detection in real scenes. Robust landmark-free methods introduce extra error, which limits their performance. In our work, we follow the landmark-free scheme and propose a hybrid coarse-fine classification scheme that intends to solve the problem of the extra error introduced by the coarse classification in @cite_5 .
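For reference, the sketch below shows a fine-classification head with integral (expected-value) regression, which is how a continuous angle is typically recovered from per-bin logits; the 66-bin, 3-degree layout is a common Hopenet-style assumption, and the paper's hybrid scheme adds a coarser classification head on top of this.

```python
import torch
import torch.nn.functional as F

def integral_angle(logits, num_bins=66, bin_width=3.0, offset=-99.0):
    """Turn (B, num_bins) classification logits over angle bins into a
    continuous Euler angle via the expected value of the bin centers."""
    probs = F.softmax(logits, dim=1)  # soft assignment over angle bins
    centers = offset + bin_width * (
        torch.arange(num_bins, dtype=logits.dtype, device=logits.device) + 0.5)
    return (probs * centers).sum(dim=1)  # expected angle in degrees
```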
{ "cite_N": [ "@cite_5" ], "mid": [ "2753578462" ], "abstract": [ "We present two techniques to improve landmark localization in images from partially annotated datasets. Our primary goal is to leverage the common situation where precise landmark locations are only provided for a small data subset, but where class labels for classification or regression tasks related to the landmarks are more abundantly available. First, we propose the framework of sequential multitasking and explore it here through an architecture for landmark localization where training with class labels acts as an auxiliary signal to guide the landmark localization on unlabeled data. A key aspect of our approach is that errors can be backpropagated through a complete landmark localization model. Second, we propose and explore an unsupervised learning technique for landmark localization based on having a model predict equivariant landmarks with respect to transformations applied to the image. We show that these techniques, improve landmark prediction considerably and can learn effective detectors even when only a small fraction of the dataset has landmark labels. We present results on two toy datasets and four real datasets, with hands and faces, and report new state-of-the-art on two datasets in the wild, e.g. with only 5 of labeled images we outperform previous state-of-the-art trained on the AFLW dataset." ] }
1901.06904
2915067382
Sound analysis research has mainly been focused on speech and music processing. The deployed methodologies are not suitable for analysis of sounds with varying background noise, in many cases with very low signal-to-noise ratio (SNR). In this paper, we present a method for the detection of patterns of interest in audio signals. We propose novel trainable feature extractors, which we call COPE (Combination of Peaks of Energy). The structure of a COPE feature extractor is determined using a single prototype sound pattern in an automatic configuration process, which is a type of representation learning. We construct a set of COPE feature extractors, configured on a number of training patterns. Then we take their responses to build feature vectors that we use in combination with a classifier to detect and classify patterns of interest in audio signals. We carried out experiments on four public data sets: MIVIA audio events, MIVIA road events, ESC-10 and TU Dortmund data sets. The results that we achieved (recognition rate equal to 91.71% on the MIVIA audio events, 94% on the MIVIA road events, 81.25% on the ESC-10 and 94.27% on the TU Dortmund) demonstrate the effectiveness of the proposed method and are higher than the ones obtained by other existing approaches. The COPE feature extractors have high robustness to variations of SNR. Real-time performance is achieved even when the value of a large number of features is computed.
Evaluation of algorithms for audio event detection on public benchmark data sets is a valuable tool for objective comparison of performance. The great attention dedicated to music and speech analysis led to the publication of several data sets used in scientific challenges for benchmarking of algorithms. The MIREX challenge series evaluated systems for music information retrieval (MIR) @cite_30 . The CHiME challenge focused on speech analysis in noisy environments @cite_16 . The "Acoustic event detection and classification" task of the CLEAR challenges (2006 and 2007) focused on the detection of sound events related to seminars, such as speech, chair moving, door opening and applause @cite_79 . Recently, the DCASE challenge @cite_25 stimulated the interest of researchers in audio processing for the analysis of environmental sounds. The attention was driven towards audio event detection and classification and scene classification.
{ "cite_N": [ "@cite_30", "@cite_79", "@cite_16", "@cite_25" ], "mid": [ "2086384421", "2775505379", "2743672024", "2759171953" ], "abstract": [ "For intelligent systems to make best use of the audio modality, it is important that they can recognize not just speech and music, which have been researched as specific tasks, but also general sounds in everyday environments. To stimulate research in this field we conducted a public research challenge: the IEEE Audio and Acoustic Signal Processing Technical Committee challenge on Detection and Classification of Acoustic Scenes and Events (DCASE). In this paper, we report on the state of the art in automatically classifying audio scenes, and automatically detecting and classifying audio events. We survey prior work as well as the state of the art represented by the submissions to the challenge from various research groups. We also provide detail on the organization of the challenge, so that our experience as challenge hosts may be useful to those organizing challenges in similar domains. We created new audio datasets and baseline systems for the challenge; these, as well as some submitted systems, are publicly available under open licenses, to serve as benchmarks for further research in general-purpose machine listening.", "DCASE 2017 Challenge consists of four tasks: acoustic scene classification , detection of rare sound events, sound event detection in real-life audio, and large-scale weakly supervised sound event detection for smart cars. This paper presents the setup of these tasks: task definition, dataset, experimental setup, and baseline system results on the development dataset. The baseline systems for all tasks rely on the same implementation using multilayer perceptron and log mel-energies, but differ in the structure of the output layer and the decision making process, as well as the evaluation of system output using task specific metrics.", "This report presents our audio event detection system submitted for Task 2, \"Detection of rare sound events\", of DCASE 2017 challenge. The proposed system is based on convolutional neural networks (CNNs) and deep neural networks (DNNs) coupled with novel weighted and multi-task loss functions and state-of-the-art phase-aware signal enhancement. The loss functions are tailored for audio event detection in audio streams. The weighted loss is designed to tackle the common issue of imbalanced data in background foreground classification while the multi-task loss enables the networks to simultaneously model the class distribution and the temporal structures of the target events for recognition. Our proposed systems significantly outperform the challenge baseline, improving F-score from 72.7 to 90.0 and reducing detection error rate from 0.53 to 0.18 on average on the development data. On the evaluation data, our submission obtains an average F1-score of 88.3 and an error rate of 0.22 which are significantly better than those obtained by the DCASE baseline (i.e. an F1-score of 64.1 and an error rate of 0.64).", "We introduce a dataset for facilitating audio-visual analysis of music performances. The dataset comprises 44 simple multi-instrument classical music pieces assembled from coordinated but separately recorded performances of individual tracks. For each piece, we provide the musical score in MIDI format, the audio recordings of the individual tracks, the audio and video recording of the assembled mixture, and ground-truth annotation files including frame-level and note-level transcriptions. 
We describe our methodology for the creation of the dataset, particularly highlighting our approaches to address the challenges involved in maintaining synchronization and expressiveness. We demonstrate the high quality of synchronization achieved with our proposed approach by comparing the dataset with existing widely used music audio datasets. We anticipate that the dataset will be useful for the development and evaluation of existing music information retrieval (MIR) tasks, as well as for novel multimodal tasks. We benchmark two existing MIR tasks (multipitch analysis and score-informed source separation) on the dataset and compare them with other existing music audio datasets. In addition, we consider two novel multimodal MIR tasks (visually informed multipitch analysis and polyphonic vibrato analysis) enabled by the dataset and provide evaluation measurements and baseline systems for future comparisons (from our recent work). Finally, we propose several emerging research directions that the dataset enables." ] }
1901.06827
2912004605
Loss functions with a large number of saddle points are one of the main obstacles to training many modern machine learning models. Gradient descent (GD) is a fundamental algorithm for machine learning and converges to a saddle point for certain initial data. We call the region formed by these initial values the "attraction region." For quadratic functions, GD converges to a saddle point if the initial data is in a subspace of up to n-1 dimensions. In this paper, we prove that a small modification of the recently proposed Laplacian smoothing gradient descent (LSGD) [Osher et al., arXiv:1806.06317] contributes to avoiding saddle points without sacrificing the convergence rate of GD. In particular, we show that the dimension of the LSGD's attraction region is at most floor((n-1)/2) for a class of quadratic functions, which is significantly smaller than GD's (n-1)-dimensional attraction region.
@cite_24 showed that GD converges to a local minimizer almost surely if the initial point is randomly chosen. The proof is based on the stable manifold theorem, and hence no upper bound on the number of steps is provided. How to escape from saddle points efficiently has been a core problem in non-convex optimization. There has been much recent work on developing algorithms which provably avoid saddle points. The existing algorithms use either perturbed noisy gradients or, partially or fully, the Hessian information.
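A two-dimensional toy example makes the attraction region concrete: for f(x, y) = (x^2 - y^2)/2, the origin is a saddle whose stable manifold is the x-axis, so GD converges to the saddle exactly when initialized on that measure-zero set.

```python
import numpy as np

def gd(x0, lr=0.1, steps=200):
    """Plain gradient descent on f(x, y) = 0.5 * (x**2 - y**2)."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        grad = np.array([x[0], -x[1]])  # gradient of f
        x -= lr * grad
    return x

print(gd([1.0, 0.0]))    # on the stable manifold: converges to the saddle (0, 0)
print(gd([1.0, 1e-6]))   # any tiny offset in y is amplified, and the iterate escapes
```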
{ "cite_N": [ "@cite_24" ], "mid": [ "2963416883" ], "abstract": [ "Although gradient descent (GD) almost always escapes saddle points asymptotically [, 2016], this paper shows that even with fairly natural random initialization schemes and non-pathological functions, GD can be significantly slowed down by saddle points, taking exponential time to escape. On the other hand, gradient descent with perturbations [, 2015, , 2017] is not slowed down by saddle points—it can find an approximate local minimizer in polynomial time. This result implies that GD is inherently slower than perturbed GD, and justifies the importance of adding perturbations for efficient non-convex optimization. While our focus is theoretical, we also present experiments that illustrate our theoretical findings." ] }
1901.06827
2912004605
Loss functions with a large number of saddle points are one of the main obstacles to training many modern machine learning models. Gradient descent (GD) is a fundamental algorithm for machine learning and converges to a saddle point for certain initial data. We call the region formed by these initial values the "attraction region." For quadratic functions, GD converges to a saddle point if the initial data is in a subspace of up to n-1 dimensions. In this paper, we prove that a small modification of the recently proposed Laplacian smoothing gradient descent (LSGD) [Osher et al., arXiv:1806.06317] contributes to avoiding saddle points without sacrificing the convergence rate of GD. In particular, we show that the dimension of the LSGD's attraction region is at most floor((n-1)/2) for a class of quadratic functions, which is significantly smaller than GD's (n-1)-dimensional attraction region.
The Hessian can be utilized to distinguish saddle points from local minima. By using the Hessian, Nesterov & Polyak @cite_20 designed a cubic regularization algorithm which converges to an @math -second-order stationary point in @math iterations; @cite_25 developed a class of trust region algorithms that achieve the same complexity. By using only products of the Hessian with vectors, @cite_3 and Carmon & Duchi @cite_2 show that it is also possible to find an @math -approximate local minimum.
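The Hessian-vector products these methods need can be formed without ever materializing the Hessian, for example by double backpropagation; a minimal PyTorch sketch:

```python
import torch

def hessian_vector_product(f, x, v):
    """Compute H(x) @ v for a scalar function f via double backprop,
    without forming the full Hessian."""
    x = x.detach().requires_grad_(True)
    grad = torch.autograd.grad(f(x), x, create_graph=True)[0]
    return torch.autograd.grad(grad, x, grad_outputs=v)[0]

# f(x) = sum(x**3) has Hessian diag(6 * x), so Hv = 6 * x * v elementwise.
x = torch.tensor([1.0, 2.0])
v = torch.tensor([1.0, 0.5])
print(hessian_vector_product(lambda z: (z ** 3).sum(), x, v))  # tensor([6., 6.])
```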
{ "cite_N": [ "@cite_3", "@cite_25", "@cite_20", "@cite_2" ], "mid": [ "2962876518", "2747280560", "2156005216", "2767752488" ], "abstract": [ "This paper proposes a stochastic variant of a classic algorithm---the cubic-regularized Newton method [Nesterov and Polyak]. The proposed algorithm efficiently escapes saddle points and finds approximate local minima for general smooth, nonconvex functions in only ˜ O (ϵ−3.5) stochastic gradient and stochastic Hessian-vector product evaluations. The latter can be computed as efficiently as stochastic gradients. This improves upon the ˜ O (ϵ−4) rate of stochastic gradient descent. Our rate matches the best-known result for finding local minima without requiring any delicate acceleration or variance-reduction techniques.", "We consider variants of trust-region and cubic regularization methods for non-convex optimization, in which the Hessian matrix is approximated. Under mild conditions on the inexact Hessian, and using approximate solution of the corresponding sub-problems, we provide iteration complexity to achieve @math -approximate second-order optimality which have shown to be tight. Our Hessian approximation conditions constitute a major relaxation over the existing ones in the literature. Consequently, we are able to show that such mild conditions allow for the construction of the approximate Hessian through various random sampling methods. In this light, we consider the canonical problem of finite-sum minimization, provide appropriate uniform and non-uniform sub-sampling strategies to construct such Hessian approximations, and obtain optimal iteration complexity for the corresponding sub-sampled trust-region and cubic regularization methods.", "An Adaptive Regularisation algorithm using Cubics (ARC) is proposed for unconstrained optimization, generalizing at the same time an unpublished method due to Griewank (Technical Report NA 12, 1981, DAMTP, University of Cambridge), an algorithm by Nesterov and Polyak (Math Program 108(1):177–205, 2006) and a proposal by (Optim Methods Softw 22(3):413–431, 2007). At each iteration of our approach, an approximate global minimizer of a local cubic regularisation of the objective function is determined, and this ensures a significant improvement in the objective so long as the Hessian of the objective is locally Lipschitz continuous. The new method uses an adaptive estimation of the local Lipschitz constant and approximations to the global model-minimizer which remain computationally-viable even for large-scale problems. We show that the excellent global and local convergence properties obtained by Nesterov and Polyak are retained, and sometimes extended to a wider class of problems, by our ARC approach. Preliminary numerical experiments with small-scale test problems from the CUTEr set show encouraging performance of the ARC algorithm when compared to a basic trust-region implementation.", "Two classes of methods have been proposed for escaping from saddle points with one using the second-order information carried by the Hessian and the other adding the noise into the first-order information. The existing analysis for algorithms using noise in the first-order information is quite involved and hides the essence of added noise, which hinder further improvements of these algorithms. 
In this paper, we present a novel perspective of noise-adding technique, i.e., adding the noise into the first-order information can help extract the negative curvature from the Hessian matrix, and provide a formal reasoning of this perspective by analyzing a simple first-order procedure. More importantly, the proposed procedure enables one to design purely first-order stochastic algorithms for escaping from non-degenerate saddle points with a much better time complexity (almost linear time in terms of the problem's dimensionality). In particular, we develop a first-order stochastic algorithm based on our new technique and an existing algorithm that only converges to a first-order stationary point to enjoy a time complexity of @math for finding a nearly second-order stationary point @math such that @math and @math (in high probability), where @math denotes the objective function and @math is the dimensionality of the problem. To the best of our knowledge, this is the best theoretical result of first-order algorithms for stochastic non-convex optimization, which is even competitive with if not better than existing stochastic algorithms hinging on the second-order information." ] }
1901.06827
2912004605
Loss functions with a large number of saddle points are one of the main obstacles to training many modern machine learning models. Gradient descent (GD) is a fundamental algorithm for machine learning and converges to a saddle point for certain initial data. We call the region formed by these initial values the "attraction region." For quadratic functions, GD converges to a saddle point if the initial data is in a subspace of up to n-1 dimensions. In this paper, we prove that a small modification of the recently proposed Laplacian smoothing gradient descent (LSGD) [Osher et al., arXiv:1806.06317] contributes to avoiding saddle points without sacrificing the convergence rate of GD. In particular, we show that the dimension of the LSGD's attraction region is at most floor((n-1)/2) for a class of quadratic functions, which is significantly smaller than GD's (n-1)-dimensional attraction region.
Since the computation of the Hessian is often too expensive in practice, algorithms without second-order information are very desirable. @cite_23 proved that stochastic gradient descent (SGD) can find local minima of strict saddle functions in polynomial time. Levy @cite_15 showed that noisy normalized gradient descent can converge faster than SGD. @cite_9 proposed perturbed gradient descent (PGD) and showed that, with an additional cost depending poly-logarithmically on the problem dimension, PGD converges to an @math -second-order stationary point of an @math -smooth function with arbitrarily high probability. By combining PGD with the heavy ball method, @cite_6 showed that PGD with momentum converges faster. @cite_4 showed that PGD or noisy GD can escape saddle points faster. More exciting work on avoiding saddle points cannot be discussed here, but we summarize some representative work in Table .
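A schematic of the PGD perturbation step: when the gradient norm is small (a suspected saddle), the iterate is kicked by a random point from a small ball, which with high probability moves it off the saddle's stable manifold. The constants below are purely illustrative, not the schedule from the analysis of @cite_9 .

```python
import numpy as np

def perturbed_gd(grad_f, x0, lr=0.01, g_thresh=1e-3, radius=0.1,
                 steps=10000, seed=0):
    """Schematic perturbed gradient descent: plain GD steps, plus a random
    ball perturbation whenever the gradient looks vanishingly small."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        g = grad_f(x)
        if np.linalg.norm(g) < g_thresh:
            u = rng.standard_normal(x.shape)
            x += radius * u / np.linalg.norm(u)  # jump off the stable manifold
        else:
            x -= lr * g
    return x
```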
{ "cite_N": [ "@cite_4", "@cite_9", "@cite_6", "@cite_23", "@cite_15" ], "mid": [ "2963416883", "2769394111", "2953242304", "2767752488", "2592651140" ], "abstract": [ "Although gradient descent (GD) almost always escapes saddle points asymptotically [, 2016], this paper shows that even with fairly natural random initialization schemes and non-pathological functions, GD can be significantly slowed down by saddle points, taking exponential time to escape. On the other hand, gradient descent with perturbations [, 2015, , 2017] is not slowed down by saddle points—it can find an approximate local minimizer in polynomial time. This result implies that GD is inherently slower than perturbed GD, and justifies the importance of adding perturbations for efficient non-convex optimization. While our focus is theoretical, we also present experiments that illustrate our theoretical findings.", "Nesterov's accelerated gradient descent (AGD), an instance of the general family of \"momentum methods\", provably achieves faster convergence rate than gradient descent (GD) in the convex setting. However, whether these methods are superior to GD in the nonconvex setting remains open. This paper studies a simple variant of AGD, and shows that it escapes saddle points and finds a second-order stationary point in @math iterations, faster than the @math iterations required by GD. To the best of our knowledge, this is the first Hessian-free algorithm to find a second-order stationary point faster than GD, and also the first single-loop algorithm with a faster rate than GD even in the setting of finding a first-order stationary point. Our analysis is based on two key ideas: (1) the use of a simple Hamiltonian function, inspired by a continuous-time perspective, which AGD monotonically decreases per step even for nonconvex functions, and (2) a novel framework called improve or localize, which is useful for tracking the long-term behavior of gradient-based optimization algorithms. We believe that these techniques may deepen our understanding of both acceleration algorithms and nonconvex optimization.", "We present an algorithm for minimizing a sum of functions that combines the computational efficiency of stochastic gradient descent (SGD) with the second order curvature information leveraged by quasi-Newton methods. We unify these disparate approaches by maintaining an independent Hessian approximation for each contributing function in the sum. We maintain computational tractability and limit memory requirements even for high dimensional optimization problems by storing and manipulating these quadratic approximations in a shared, time evolving, low dimensional subspace. Each update step requires only a single contributing function or minibatch evaluation (as in SGD), and each step is scaled using an approximate inverse Hessian and little to no adjustment of hyperparameters is required (as is typical for quasi-Newton methods). This algorithm contrasts with earlier stochastic second order techniques that treat the Hessian of each contributing function as a noisy approximation to the full Hessian, rather than as a target for direct estimation. We experimentally demonstrate improved convergence on seven diverse optimization problems. The algorithm is released as open source Python and MATLAB packages.", "Two classes of methods have been proposed for escaping from saddle points with one using the second-order information carried by the Hessian and the other adding the noise into the first-order information. 
The existing analysis for algorithms using noise in the first-order information is quite involved and hides the essence of added noise, which hinder further improvements of these algorithms. In this paper, we present a novel perspective of noise-adding technique, i.e., adding the noise into the first-order information can help extract the negative curvature from the Hessian matrix, and provide a formal reasoning of this perspective by analyzing a simple first-order procedure. More importantly, the proposed procedure enables one to design purely first-order stochastic algorithms for escaping from non-degenerate saddle points with a much better time complexity (almost linear time in terms of the problem's dimensionality). In particular, we develop a first-order stochastic algorithm based on our new technique and an existing algorithm that only converges to a first-order stationary point to enjoy a time complexity of @math for finding a nearly second-order stationary point @math such that @math and @math (in high probability), where @math denotes the objective function and @math is the dimensionality of the problem. To the best of our knowledge, this is the best theoretical result of first-order algorithms for stochastic non-convex optimization, which is even competitive with if not better than existing stochastic algorithms hinging on the second-order information.", "This paper shows that a perturbed form of gradient descent converges to a second-order stationary point in a number iterations which depends only poly-logarithmically on dimension (i.e., it is almost \"dimension-free\"). The convergence rate of this procedure matches the well-known convergence rate of gradient descent to first-order stationary points, up to log factors. When all saddle points are non-degenerate, all second-order stationary points are local minima, and our result thus shows that perturbed gradient descent can escape saddle points almost for free. Our results can be directly applied to many machine learning applications, including deep learning. As a particular concrete example of such an application, we show that our results can be used directly to establish sharp global convergence rates for matrix factorization. Our results rely on a novel characterization of the geometry around saddle points, which may be of independent interest to the non-convex optimization community." ] }
1901.06827
2912004605
Loss functions with a large number of saddle points are one of the main obstacles to training many modern machine learning models. Gradient descent (GD) is a fundamental algorithm for machine learning and converges to a saddle point for certain initial data. We call the region formed by these initial values the "attraction region." For quadratic functions, GD converges to a saddle point if the initial data is in a subspace of up to n-1 dimensions. In this paper, we prove that a small modification of the recently proposed Laplacian smoothing gradient descent (LSGD) [Osher et al., arXiv:1806.06317] contributes to avoiding saddle points without sacrificing the convergence rate of GD. In particular, we show that the dimension of the LSGD's attraction region is at most floor((n-1)/2) for a class of quadratic functions, which is significantly smaller than GD's (n-1)-dimensional attraction region.
@cite_17 recently proposed LSGD based on the theory of Hamilton-Jacobi partial differential equations. LSGD replaces the gradient with a Laplacian-smoothed surrogate, which can be computed efficiently with the Thomas algorithm or the Fast Fourier Transform (FFT). LSGD can reduce the variance of the stochastic gradient on-the-fly, avoid spurious local minima, and lead to better generalization in training a large number of classical machine learning and deep learning models.
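A minimal 1-D sketch of the smoothing step: the surrogate is (I - sigma * L)^(-1) applied to the gradient, where L is the discrete Laplacian with periodic boundary; the FFT diagonalizes this tridiagonal system, so the solve costs O(n log n). Setting sigma = 0 recovers plain GD.

```python
import numpy as np

def laplacian_smooth(grad, sigma=1.0):
    """Return (I - sigma * L)^(-1) grad for the periodic 1-D discrete
    Laplacian L, computed via the FFT."""
    n = grad.shape[0]
    k = np.arange(n)
    # Fourier symbol of (I - sigma * L): the eigenvalues of -L are
    # 2 - 2cos(2*pi*k/n) >= 0, so the system is well conditioned for sigma >= 0.
    symbol = 1.0 + 2.0 * sigma * (1.0 - np.cos(2.0 * np.pi * k / n))
    return np.real(np.fft.ifft(np.fft.fft(grad) / symbol))

# One LSGD step on a flattened parameter vector theta with gradient g:
#   theta -= lr * laplacian_smooth(g, sigma=1.0)
```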
{ "cite_N": [ "@cite_17" ], "mid": [ "2808102858" ], "abstract": [ "We propose a very simple modification of gradient descent and stochastic gradient descent. We show that when applied to a variety of machine learning models including softmax regression, convolutional neural nets, generative adversarial nets, and deep reinforcement learning, this very simple surrogate can dramatically reduce the variance and improve the accuracy of the generalization. The new algorithm, (which depends on one nonnegative parameter) when applied to non-convex minimization, tends to avoid sharp local minima. Instead it seeks somewhat flatter local (and often global) minima. The method only involves preconditioning the gradient by the inverse of a tri-diagonal matrix that is positive definite. The motivation comes from the theory of Hamilton-Jacobi partial differential equations. This theory demonstrates that the new algorithm is almost the same as doing gradient descent on a new function which (a) has the same global minima as the original function and (b) is \"more convex\". Again, the programming effort in doing this is minimal, in cost, complexity and effort. We implement our algorithm into both PyTorch and Tensorflow platforms, which will be made publicly available." ] }
1901.06773
2912976659
Typically, an ultra-deep neural network (UDNN) tends to yield a high-quality model, but its training process is usually resource-intensive and time-consuming. Modern GPUs' scarce DRAM capacity is the primary bottleneck that hinders the trainability and the training efficiency of UDNNs. In this paper, we present "AccUDNN", an accelerator that aims to make the utmost use of finite GPU memory resources to speed up the training process of UDNNs. AccUDNN mainly includes two modules: a memory optimizer and a hyperparameter tuner. The memory optimizer develops a performance-model-guided dynamic swap-out/in strategy; by offloading appropriate data to host memory, the GPU memory footprint can be significantly slashed to overcome the restriction on the trainability of UDNNs. After applying the memory optimization strategy, the hyperparameter tuner is designed to explore the efficiency-optimal minibatch size and the matched learning rate. Evaluations demonstrate that AccUDNN cuts down the GPU memory requirement of ResNet-152 from more than 24GB to 8GB. In turn, given a 12GB GPU memory budget, the efficiency-optimal minibatch size can reach 4.2x larger than that of the original Caffe. Benefiting from better utilization of a single GPU's computing resources and fewer parameter synchronizations thanks to the large minibatch size, a 7.7x speed-up is achieved by an 8-GPU cluster without any communication optimization and with no accuracy loss.
The former avenue has already been explored by IT companies with abundant hardware resources. In Facebook's experiment @cite_6 , 256 GPUs train ResNet-50 with a minibatch size of 8192 and finish in 1 hour. @cite_22 extends Facebook's experiment to 1024 GPUs and finishes in 15 minutes. However, the latter avenue has not been fully studied due to the limitation of GPU memory capacity: in the two experiments mentioned above, the minibatch size per GPU is set to only 32. Given a fixed number of servers, the smaller the minibatch size processed by each server, the more frequent the global parameter synchronization across the cluster, as the sketch below illustrates. With common commodity Ethernet being 100x slower than physical memory, the time cost of inter-machine communication among dozens or hundreds of servers usually leads to diminishing returns.
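The arithmetic behind this trade-off is simple: with data-parallel synchronous SGD, the number of global synchronizations per epoch equals the dataset size divided by the global minibatch size, so doubling the per-GPU minibatch halves the communication rounds. ImageNet-scale numbers are assumed purely for illustration.

```python
# Synchronizations per epoch for data-parallel synchronous SGD.
dataset = 1_281_167  # ImageNet-1k training images (illustrative)
gpus = 256
for per_gpu_batch in (32, 64, 128):
    global_batch = gpus * per_gpu_batch
    syncs = dataset // global_batch
    print(f"per-GPU batch {per_gpu_batch:>3}: {syncs} synchronizations/epoch")
# per-GPU batch  32: 156 synchronizations/epoch
# per-GPU batch  64: 78 synchronizations/epoch
# per-GPU batch 128: 39 synchronizations/epoch
```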
{ "cite_N": [ "@cite_22", "@cite_6" ], "mid": [ "2769856846", "2622263826" ], "abstract": [ "We demonstrate that training ResNet-50 on ImageNet for 90 epochs can be achieved in 15 minutes with 1024 Tesla P100 GPUs. This was made possible by using a large minibatch size of 32k. To maintain accuracy with this large minibatch size, we employed several techniques such as RMSprop warm-up, batch normalization without moving averages, and a slow-start learning rate schedule. This paper also describes the details of the hardware and software of the system used to achieve the above performance.", "Deep learning thrives with large neural networks and large datasets. However, larger networks and larger datasets result in longer training times that impede research and development progress. Distributed synchronous SGD offers a potential solution to this problem by dividing SGD minibatches over a pool of parallel workers. Yet to make this scheme efficient, the per-worker workload must be large, which implies nontrivial growth in the SGD minibatch size. In this paper, we empirically show that on the ImageNet dataset large minibatches cause optimization difficulties, but when these are addressed the trained networks exhibit good generalization. Specifically, we show no loss of accuracy when training with large minibatch sizes up to 8192 images. To achieve this result, we adopt a hyper-parameter-free linear scaling rule for adjusting learning rates as a function of minibatch size and develop a new warmup scheme that overcomes optimization challenges early in training. With these simple techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of 8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using commodity hardware, our implementation achieves 90 scaling efficiency when moving from 8 to 256 GPUs. Our findings enable training visual recognition models on internet-scale data with high efficiency." ] }
1901.06654
2911942780
Many biological data analysis processes like cytometry or Next Generation Sequencing (NGS) produce massive amounts of data which need to be processed in batches for downstream analysis. Such datasets are prone to technical variations due to differences in handling the batches, possibly at different times, by different experimenters or under other different conditions. This adds variation to the batches coming from the same source sample. These variations are known as batch effects. It is possible that these variations and natural variations due to biology confound each other, but such situations can be avoided by performing experiments in a carefully planned manner. Batch effects can hamper downstream analysis and may also cause results to be inconclusive. Thus, it is essential to correct for these effects. Some recent methods propose deep learning based solutions to solve this problem. We demonstrate that this can be solved using a novel Generative Adversarial Networks (GANs) based framework. The advantage of using this framework over prior approaches is that we are not required to choose a reproducing kernel and define its parameters. We demonstrate results of our framework on a mass cytometry dataset.
Some of the recent deep learning based methods to solve this problem utilize residual networks to learn a near-identity mapping from source to target by optimizing the Maximum Mean Discrepancy (MMD) between the transformed source and the original target @cite_10 @cite_7 @cite_1 . MMD is one of several methods used to quantify the distance between two continuous distributions. It uses the distance between the mean embeddings of the distributions in some transformed space as a proxy for the distance between the distributions in the original space. Let $P$ and $Q$ be two distributions over a set $\mathcal{X}$, and let $\phi$ be a transformation from $\mathcal{X}$ to a reproducing kernel Hilbert space $\mathcal{H}$; then MMD is defined as $\mathrm{MMD}(P, Q) = \left\| \mathbb{E}_{x \sim P}[\phi(x)] - \mathbb{E}_{y \sim Q}[\phi(y)] \right\|_{\mathcal{H}}$. MMD depends upon the choice of reproducing kernel, and hence one needs to devise a method to find the optimal kernel parameters. We propose a solution based on a Generative Adversarial Network (GAN) and report our results on a mass cytometry dataset which was used for a similar study before. Since our method does not involve a reproducing kernel, we do not have to discover and specify kernel parameters explicitly.
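For reference, the quantity those residual-network methods minimize can be estimated from samples as below; the Gaussian kernel and its bandwidth are precisely the choices the GAN-based framework avoids having to make. A minimal PyTorch sketch of the (biased) empirical estimator:

```python
import torch

def gaussian_mmd2(x, y, bandwidth=1.0):
    """Biased empirical MMD^2 between samples x of shape (n, d) and
    y of shape (m, d), using the Gaussian kernel
    k(a, b) = exp(-|a - b|^2 / (2 * h^2))."""
    def k(a, b):
        d2 = torch.cdist(a, b).pow(2)  # pairwise squared distances
        return torch.exp(-d2 / (2.0 * bandwidth ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()
```

Training a batch-correction network against this loss requires tuning the bandwidth (often a mixture of bandwidths is used); replacing the kernel critic with an adversarial discriminator removes that tuning step.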
{ "cite_N": [ "@cite_1", "@cite_10", "@cite_7" ], "mid": [ "2962892300", "2962919088", "2950863313" ], "abstract": [ "Generative moment matching network (GMMN) is a deep generative model that differs from Generative Adversarial Network (GAN) by replacing the discriminator in GAN with a two-sample test based on kernel maximum mean discrepancy (MMD). Although some theoretical guarantees of MMD have been studied, the empirical performance of GMMN is still not as competitive as that of GAN on challenging and large benchmark datasets. The computational efficiency of GMMN is also less desirable in comparison with GAN, partially due to its requirement for a rather large batch size during the training. In this paper, we propose to improve both the model expressiveness of GMMN and its computational efficiency by introducing adversarial kernel learning techniques, as the replacement of a fixed Gaussian kernel in the original GMMN. The new approach combines the key ideas in both GMMN and GAN, hence we name it MMD-GAN. The new distance measure in MMD-GAN is a meaningful loss that enjoys the advantage of weak @math topology and can be optimized via gradient descent with relatively small batch sizes. In our evaluation on multiple benchmark datasets, including MNIST, CIFAR-10, CelebA and LSUN, the performance of MMD-GAN significantly outperforms GMMN, and is competitive with other representative GAN works.", "We investigate the training and performance of generative adversarial networks using the Maximum Mean Discrepancy (MMD) as critic, termed MMD GANs. As our main theoretical contribution, we clarify the situation with bias in GAN loss functions raised by recent work: we show that gradient estimators used in the optimization process for both MMD GANs and Wasserstein GANs are unbiased, but learning a discriminator based on samples leads to biased gradients for the generator parameters. We also discuss the issue of kernel choice for the MMD critic, and characterize the kernel corresponding to the energy distance used for the Cramer GAN critic. Being an integral probability metric, the MMD benefits from training strategies recently developed for Wasserstein GANs. In experiments, the MMD GAN is able to employ a smaller critic network than the Wasserstein GAN, resulting in a simpler and faster-training algorithm with matching performance. We also propose an improved measure of GAN convergence, the Kernel Inception Distance, and show how to use it to dynamically adapt learning rates during GAN training.", "We propose a method to optimize the representation and distinguishability of samples from two probability distributions, by maximizing the estimated power of a statistical test based on the maximum mean discrepancy (MMD). This optimized MMD is applied to the setting of unsupervised learning by generative adversarial networks (GAN), in which a model attempts to generate realistic samples, and a discriminator attempts to tell these apart from data samples. In this context, the MMD may be used in two roles: first, as a discriminator, either directly on the samples, or on features of the samples. Second, the MMD can be used to evaluate the performance of a generative model, by testing the model's samples against a reference data set. In the latter role, the optimized MMD is particularly helpful, as it gives an interpretable indication of how the model and data distributions differ, even in cases where individual model samples are not easily distinguished either by eye or by classifier." ] }
1901.06610
2914913241
Document classification is a challenging task with important applications. Deep learning approaches to the problem have gained much attention recently. Despite the progress, the proposed models do not incorporate knowledge of the document structure into the architecture efficiently and do not take into account the contextual importance of words and sentences. In this paper, we propose a new approach based on a combination of convolutional neural networks, gated recurrent units, and attention mechanisms for document classification tasks. The main contribution of this work is the use of convolution layers to extract more meaningful, generalizable and abstract features through the hierarchical representation. The proposed method improves the results of the current attention-based approaches for document classification.
A variety of methods for document and text classification exists in the literature; more recent works have employed deep learning. A hierarchical neural architecture, whose structure mirrors the hierarchical structure of documents, was proposed by @cite_15 . The intuition underlying the model is that not all parts of a document are equally relevant for representing it. Moreover, determining the relevant sections involves modeling the interactions among the words, not just their presence in the text.
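To make the "not all parts are equally relevant" intuition concrete, here is a minimal numpy sketch of additive attention pooling over word representations; the dimensions and the parameters `W`, `b`, `u` (which would be learned in practice) are illustrative assumptions, not taken from @cite_15 .

```python
import numpy as np

def attention_pool(H, W, b, u):
    # H: (T, d) word representations; score each word, softmax-normalize,
    # and return the weighted sum as the sentence vector.
    scores = np.tanh(H @ W + b) @ u          # (T,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                 # attention weights over words
    return weights @ H                       # (d,) sentence representation

rng = np.random.default_rng(0)
T, d, a = 6, 8, 4                            # words, hidden size, attention size
H = rng.normal(size=(T, d))
W, b, u = rng.normal(size=(d, a)), np.zeros(a), rng.normal(size=a)
print(attention_pool(H, W, b, u).shape)      # (8,)
```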
{ "cite_N": [ "@cite_15" ], "mid": [ "2788667846" ], "abstract": [ "Text classification to a hierarchical taxonomy of topics is a common and practical problem. Traditional approaches simply use bag-of-words and have achieved good results.However, when there are a lot of labels with different topical granularities, bag-of-words representation may not be enough.Deep learning models have been proven to be effective to automatically learn different levels of representations for image data.It is interesting to study what is the best way to represent texts.In this paper, we propose a graph-CNN based deep learning model to first convert texts to graph-of-words, and then use graph convolution operations to convolve the word graph.Graph-of-words representation of texts has the advantage of capturing non-consecutive and long-distance semantics.CNN models have the advantage of learning different level of semantics.To further leverage the hierarchy of labels, we regularize the deep architecture with the dependency among labels.Our results on both RCV1 and NYTimes datasets show that we can significantly improve large-scale hierarchical text classification over traditional hierarchical text classification and existing deep models." ] }
1901.06610
2914913241
Document classification is a challenging task with important applications. Deep learning approaches to the problem have gained much attention recently. Despite the progress, the proposed models do not incorporate knowledge of the document structure into the architecture efficiently and do not take into account the contextual importance of words and sentences. In this paper, we propose a new approach based on a combination of convolutional neural networks, gated recurrent units, and attention mechanisms for document classification tasks. The main contribution of this work is the use of convolution layers to extract more meaningful, generalizable and abstract features through the hierarchical representation. The proposed method improves the results of the current attention-based approaches for document classification.
A problem with approaches such as word2vec @cite_2 is that the model does not consider the morphology of words, assigning a distinct vector to each word. This is a limitation, especially for languages with large vocabularies and many rare words. Such languages contain many word forms that rarely occur in the training corpus, making it difficult to learn good word representations. However, many word formations follow the same language rules, and it is possible to improve vector representations for morphologically rich languages by using character-level information. In @cite_20 a solution was proposed by learning representations for character n-grams and representing words as the sum of the n-gram vectors. That paper introduces an extension of the continuous skip-gram model @cite_9 that takes subword information into account.
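A minimal Python sketch of the subword idea: extract character n-grams with boundary markers and represent a word as the sum of its n-gram vectors. The hashing into a fixed-size table, the vector dimension, and the n-gram range are illustrative assumptions rather than the exact setup of @cite_20 .

```python
import numpy as np

def char_ngrams(word, n_min=3, n_max=6):
    # Boundary markers '<' and '>' distinguish prefixes and suffixes.
    w = f"<{word}>"
    return [w[i:i + n] for n in range(n_min, n_max + 1)
            for i in range(len(w) - n + 1)]

dim, table_size = 16, 2**16
rng = np.random.default_rng(0)
ngram_vectors = rng.normal(scale=0.1, size=(table_size, dim))

def word_vector(word):
    # Word representation = sum of the vectors of its character n-grams.
    # Python's salted str hash is fine for a demo; a fixed hash would be
    # used in practice so representations are stable across runs.
    idx = [hash(g) % table_size for g in char_ngrams(word)]
    return ngram_vectors[idx].sum(axis=0)

print(word_vector("where").shape)  # (16,)
```

Because unseen words still decompose into known n-grams, this scheme can produce vectors for words that never appeared in the training corpus.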
{ "cite_N": [ "@cite_9", "@cite_20", "@cite_2" ], "mid": [ "2493916176", "2952566282", "2153579005" ], "abstract": [ "Continuous word representations, trained on large unlabeled corpora are useful for many natural language processing tasks. Popular models to learn such representations ignore the morphology of words, by assigning a distinct vector to each word. This is a limitation, especially for languages with large vocabularies and many rare words. In this paper, we propose a new approach based on the skipgram model, where each word is represented as a bag of character n-grams. A vector representation is associated to each character n-gram, words being represented as the sum of these representations. Our method is fast, allowing to train models on large corpora quickly and allows to compute word representations for words that did not appear in the training data. We evaluate our word representations on nine different languages, both on word similarity and analogy tasks. By comparing to recently proposed morphological word representations, we show that our vectors achieve state-of-the-art performance on these tasks.", "Continuous word representations, trained on large unlabeled corpora are useful for many natural language processing tasks. Popular models that learn such representations ignore the morphology of words, by assigning a distinct vector to each word. This is a limitation, especially for languages with large vocabularies and many rare words. In this paper, we propose a new approach based on the skipgram model, where each word is represented as a bag of character @math -grams. A vector representation is associated to each character @math -gram; words being represented as the sum of these representations. Our method is fast, allowing to train models on large corpora quickly and allows us to compute word representations for words that did not appear in the training data. We evaluate our word representations on nine different languages, both on word similarity and analogy tasks. By comparing to recently proposed morphological word representations, we show that our vectors achieve state-of-the-art performance on these tasks.", "The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible." ] }
1901.06610
2914913241
Document classification is a challenging task with important applications. Deep learning approaches to the problem have gained much attention recently. Despite the progress, the proposed models do not incorporate knowledge of the document structure into the architecture efficiently and do not take into account the contextual importance of words and sentences. In this paper, we propose a new approach based on a combination of convolutional neural networks, gated recurrent units, and attention mechanisms for document classification tasks. The main contribution of this work is the use of convolution layers to extract more meaningful, generalizable and abstract features through the hierarchical representation. The proposed method improves the results of the current attention-based approaches for document classification.
The Dynamic Convolutional Neural Network (DCNN) for the semantic modeling of sentences was introduced in @cite_19 . It uses a global pooling operation over linear sequences, named k-Max Pooling, which is applied after the last convolutional layer to guarantee that the input to the fully connected layers is independent of the length of the input sentence. The k-max pooling operation makes it possible to pool the k most active features in a sequence.
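A minimal numpy sketch of k-max pooling, which keeps the k largest activations of each feature map while preserving their original temporal order; the toy feature maps are illustrative.

```python
import numpy as np

def k_max_pooling(x, k):
    # Indices of the k largest activations per row, re-sorted so the
    # selected values keep their original (temporal) order.
    idx = np.sort(np.argsort(x, axis=-1)[..., -k:], axis=-1)
    return np.take_along_axis(x, idx, axis=-1)

feature_maps = np.array([[1.0, 5.0, 2.0, 7.0, 3.0],
                         [0.5, 0.1, 0.9, 0.2, 0.8]])
print(k_max_pooling(feature_maps, k=3))
# [[5.  7.  3. ]
#  [0.5 0.9 0.8]]
```

Because the output always has k columns regardless of the input length, the fully connected layers that follow receive a fixed-size input.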
{ "cite_N": [ "@cite_19" ], "mid": [ "2120615054" ], "abstract": [ "The ability to accurately represent sentences is central to language understanding. We describe a convolutional architecture dubbed the Dynamic Convolutional Neural Network (DCNN) that we adopt for the semantic modelling of sentences. The network uses Dynamic k-Max Pooling, a global pooling operation over linear sequences. The network handles input sentences of varying length and induces a feature graph over the sentence that is capable of explicitly capturing short and long-range relations. The network does not rely on a parse tree and is easily applicable to any language. We test the DCNN in four experiments: small scale binary and multi-class sentiment prediction, six-way question classification and Twitter sentiment prediction by distant supervision. The network achieves excellent performance in the first three tasks and a greater than 25 error reduction in the last task with respect to the strongest baseline." ] }
1901.06610
2914913241
Document classification is a challenging task with important applications. Deep learning approaches to the problem have gained much attention recently. Despite the progress, the proposed models do not incorporate knowledge of the document structure into the architecture efficiently and do not take into account the contextual importance of words and sentences. In this paper, we propose a new approach based on a combination of convolutional neural networks, gated recurrent units, and attention mechanisms for document classification tasks. The main contribution of this work is the use of convolution layers to extract more meaningful, generalizable and abstract features through the hierarchical representation. The proposed method improves the results of the current attention-based approaches for document classification.
The Gated Recurrent Unit (GRU), a version that is simpler to compute and implement than the LSTM, was proposed by @cite_14 . It has two gates (reset and update) that effectively allow the hidden state to drop any information considered irrelevant, allowing a more compact representation. The GRU works similarly to the memory cell in the LSTM network and helps the RNN remember long-term information.
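A minimal numpy sketch of a single GRU step under the standard formulation (update gate z, reset gate r); the weight shapes and toy inputs are illustrative assumptions, not the exact parameterization of @cite_14 .

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h_prev, Wz, Uz, Wr, Ur, Wh, Uh):
    z = sigmoid(x @ Wz + h_prev @ Uz)              # update gate
    r = sigmoid(x @ Wr + h_prev @ Ur)              # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h_prev) @ Uh)  # candidate state
    return (1 - z) * h_prev + z * h_tilde          # interpolated new state

rng = np.random.default_rng(0)
d_in, d_h = 4, 3
params = [rng.normal(scale=0.1, size=s)
          for s in [(d_in, d_h), (d_h, d_h)] * 3]  # Wz,Uz,Wr,Ur,Wh,Uh
h = np.zeros(d_h)
for t in range(5):                                 # unroll over a toy sequence
    h = gru_step(rng.normal(size=d_in), h, *params)
print(h)
```

The update gate z interpolates directly between the old and candidate states, which is what lets gradients flow over long spans without a separate memory cell.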
{ "cite_N": [ "@cite_14" ], "mid": [ "2319453305" ], "abstract": [ "Recurrent neural networks (RNN) have been very successful in handling sequence data. However, understanding RNN and finding the best practices for RNN learning is a difficult task, partly because there are many competing and complex hidden units, such as the long short-term memory (LSTM) and the gated recurrent unit (GRU). We propose a gated unit for RNN, named as minimal gated unit (MGU), since it only contains one gate, which is a minimal design among all gated hidden units. The design of MGU benefits from evaluation results on LSTM and GRU in the literature. Experiments on various sequence data show that MGU has comparable accuracy with GRU, but has a simpler structure, fewer parameters, and faster training. Hence, MGU is suitable in RNN's applications. Its simple architecture also means that it is easier to evaluate and tune, and in principle it is easier to study MGU's properties theoretically and empirically." ] }
1901.06610
2914913241
Document classification is a challenging task with important applications. Deep learning approaches to the problem have gained much attention recently. Despite the progress, the proposed models do not incorporate knowledge of the document structure into the architecture efficiently and do not take into account the contextual importance of words and sentences. In this paper, we propose a new approach based on a combination of convolutional neural networks, gated recurrent units, and attention mechanisms for document classification tasks. The main contribution of this work is the use of convolution layers to extract more meaningful, generalizable and abstract features through the hierarchical representation. The proposed method improves the results of the current attention-based approaches for document classification.
A systematic evaluation of generic convolutional and recurrent architectures for sequence modeling, motivated by some recent results with convolutional neural networks, is presented in @cite_5 . These results suggest that convolutional architectures outperform recurrent neural networks on tasks such as audio synthesis and machine translation. The authors also presented the Temporal Convolutional Network (TCN).
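TCNs are built from causal, dilated 1-D convolutions, so the output at time t depends only on inputs at time t and earlier. A minimal numpy sketch of one such layer follows; the kernel taps, dilation, and toy signal are illustrative assumptions, not the exact TCN of @cite_5 .

```python
import numpy as np

def causal_dilated_conv1d(x, w, dilation=1):
    # y[t] depends only on x[t], x[t-d], x[t-2d], ... (no future leakage).
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])  # left-pad to keep causality
    return np.array([sum(w[j] * xp[t + pad - j * dilation] for j in range(k))
                     for t in range(len(x))])

x = np.arange(8, dtype=float)
w = np.array([0.5, 0.3, 0.2])  # taps applied at offsets 0, -d, -2d
print(causal_dilated_conv1d(x, w, dilation=2))
```

Stacking such layers with exponentially growing dilations gives a receptive field that covers long histories with few layers, which is the usual argument for TCNs over RNNs.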
{ "cite_N": [ "@cite_5" ], "mid": [ "2890832667" ], "abstract": [ "Recurrent neural networks (RNNs) such as long short-term memory and gated recurrent units are pivotal building blocks across a broad spectrum of sequence modeling problems. This paper proposes a recurrently controlled recurrent network (RCRN) for expressive and powerful sequence encoding. More concretely, the key idea behind our approach is to learn the recurrent gating functions using recurrent networks. Our architecture is split into two components - a controller cell and a listener cell whereby the recurrent controller actively influences the compositionality of the listener cell. We conduct extensive experiments on a myriad of tasks in the NLP domain such as sentiment analysis (SST, IMDb, Amazon reviews, etc.), question classification (TREC), entailment classification (SNLI, SciTail), answer selection (WikiQA, TrecQA) and reading comprehension (NarrativeQA). Across all 26 datasets, our results demonstrate that RCRN not only consistently outperforms BiLSTMs but also stacked BiLSTMs, suggesting that our controller architecture might be a suitable replacement for the widely adopted stacked architecture. Additionally, RCRN achieves state-of-the-art results on several well-established datasets." ] }
1901.06631
2950162495
Community detection refers to the task of discovering groups of vertices sharing similar properties or functions so as to understand the network data. With the recent development of deep learning, graph representation learning techniques are also utilized for community detection. However, the communities can only be inferred by applying clustering algorithms based on learned vertex embeddings. Such general clustering algorithms, like K-means and the Gaussian Mixture Model, cannot output highly overlapping communities, which have been shown to be very common in many real-world networks. In this paper, we propose CommunityGAN, a novel community detection framework that jointly solves overlapping community detection and graph representation learning. First, unlike the embedding of conventional graph representation learning algorithms, where the vector entry values have no specific meanings, the embedding of CommunityGAN indicates the membership strength of vertices to communities. Second, a specifically designed Generative Adversarial Net (GAN) is adopted to optimize such embedding. Through the minimax competition between the motif-level generator and discriminator, both can alternately and iteratively boost their performance and finally output a better community structure. Extensive experiments on synthetic data and real-world tasks demonstrate that CommunityGAN achieves substantial community detection performance gains over the state-of-the-art methods.
Many community detection algorithms have been proposed from different perspectives. One direction is to design a measure of the quality of a community, such as modularity, so that community structure can be uncovered by optimizing such measures @cite_15 @cite_18 . Another direction is to adopt generative models that describe the generation of graphs, so that communities can be inferred by fitting graphs to such models @cite_19 @cite_35 . Moreover, some models focus on the graph adjacency matrix and output the relationship between vertices and communities by applying matrix factorization algorithms to the adjacency matrix @cite_21 @cite_11 . These models often consider the dense community overlapping problem and detect overlapping communities. However, the performance of these methods is restricted by performing pairwise reconstruction with bilinear models.
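A minimal numpy sketch of the matrix-factorization direction: a symmetric non-negative factorization A ≈ F Fᵀ of the adjacency matrix, where F[v, c] can be read as the membership strength of vertex v in community c, so overlap falls out naturally. The damped multiplicative update and the toy graph are illustrative assumptions, not the algorithms of @cite_21 or @cite_11 .

```python
import numpy as np

# Toy graph: two triangles sharing vertex 2, so vertex 2 overlaps both groups.
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (2, 4), (3, 4)]
A = np.zeros((5, 5))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0

# Symmetric NMF A ~ F @ F.T via damped multiplicative updates;
# F[v, c] >= 0 is read as the membership strength of vertex v in community c.
rng = np.random.default_rng(0)
F = rng.uniform(0.1, 1.0, size=(5, 2))
for _ in range(500):
    numer = A @ F
    denom = F @ (F.T @ F) + 1e-9
    F *= 0.5 * (1.0 + numer / denom)  # damping factor beta = 0.5

print(np.round(F, 2))    # vertex 2 typically loads on both columns
print(F.argmax(axis=1))  # hard assignment, discarding the overlap
```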
{ "cite_N": [ "@cite_35", "@cite_18", "@cite_21", "@cite_19", "@cite_15", "@cite_11" ], "mid": [ "2079149066", "1976160665", "2963300816", "1526954821", "2167026441", "362613609" ], "abstract": [ "Four major factors govern the intricacies of community extraction in networks: (1) the literature offers a multitude of disparate community detection algorithms whose output exhibits high structural variability across the collection, (2) communities identified by algorithms may differ structurally from real communities that arise in practice, (3) there is no consensus characterizing how to discriminate communities from noncommunities, and (4) the application domain includes a wide variety of networks of fundamentally different natures. In this article, we present a class separability framework to tackle these challenges through a comprehensive analysis of community properties. Our approach enables the assessment of the structural dissimilarity among the output of multiple community detection algorithms and between the output of algorithms and communities that arise in practice. In addition, our method provides us with a way to organize the vast collection of community detection algorithms by grouping those that behave similarly. Finally, we identify the most discriminative graph-theoretical properties of community signature and the small subset of properties that account for most of the biases of the different community detection algorithms. We illustrate our approach with an experimental analysis, which reveals nuances of the structure of real and extracted communities. In our experiments, we furnish our framework with the output of 10 different community detection procedures, representative of categories of popular algorithms available in the literature, applied to a diverse collection of large-scale real network datasets whose domains span biology, online shopping, and social systems. We also analyze communities identified by annotations that accompany the data, which reflect exemplar communities in various domain. We characterize these communities using a broad spectrum of community properties to produce the different structural classes. As our experiments show that community structure is not a universal concept, our framework enables an informed choice of the most suitable community detection method for identifying communities of a specific type in a given network and allows for a comparison of existing community detection algorithms while guiding the design of new ones.", "Graph clustering has been widely applied in exploring regularities emerging in relational data. Recently, the rapid development of network theory correlates graph clustering with the detection of community structure, a common and important topological characteristic of networks. Most existing methods investigate the community structure at a single topological scale. However, as shown by empirical studies, the community structure of real world networks often exhibits multiple topological descriptions, corresponding to the clustering at different resolutions. Furthermore, the detection of multiscale community structure is heavily affected by the heterogeneous distribution of node degree. It is very challenging to detect multiscale community structure in heterogeneous networks. In this paper, we propose a novel, unified framework for detecting community structure from the perspective of dimensionality reduction. 
Based on the framework, we first prove that the well-known Laplacian matrix for network partition and the widely-used modularity matrix for community detection are two kinds of covariance matrices used in dimensionality reduction. We then propose a novel method to detect communities at multiple topological scales within our framework. We further show that existing algorithms fail to deal with heterogeneous node degrees. We develop a novel method to handle heterogeneity of networks by introducing a rescaling transformation into the covariance matrices in our framework. Extensive tests on real world and artificial networks demonstrate that the proposed correlation matrices significantly outperform Laplacian and modularity matrices in terms of their ability to identify multiscale community structure in heterogeneous networks.", "Community detection is the task of detecting hidden communities from observed interactions. Guaranteed community detection has so far been mostly limited to models with non-overlapping communities such as the stochastic block model. In this paper, we remove this restriction, and provide guaranteed community detection for a family of probabilistic network models with overlapping communities, termed as the mixed membership Dirichlet model, first introduced by (2008). This model allows for nodes to have fractional memberships in multiple communities and assumes that the community memberships are drawn from a Dirichlet distribution. Moreover, it contains the stochastic block model as a special case. We propose a unified approach to learning these models via a tensor spectral decomposition method. Our estimator is based on low-order moment tensor of the observed network, consisting of 3-star counts. Our learning method is fast and is based on simple linear algebraic operations, e.g., singular value decomposition and tensor power iterations. We provide guaranteed recovery of community memberships and model parameters and present a careful finite sample analysis of our learning method. As an important special case, our results match the best known scaling requirements for the (homogeneous) stochastic block model.", "We consider the problem of community detection in the Stochastic Block Model with a finite number @math of communities of sizes linearly growing with the network size @math . This model consists in a random graph such that each pair of vertices is connected independently with probability @math within communities and @math across communities. One observes a realization of this random graph, and the objective is to reconstruct the communities from this observation. We show that under spectral algorithms, the number of misclassified vertices does not exceed @math with high probability as @math grows large, whenever @math , @math and where @math and @math denote the (fixed) proportions of vertices in the two smallest communities. In view of recent work by and , this establishes that the proposed spectral algorithms are able to exactly recover communities whenever this is at all possible in the case of networks with two communities with equal sizes. We conjecture that condition (1) is actually necessary to obtain less than @math misclassified vertices asymptotically, which would establish the optimality of spectral method in more general scenarios.", "Detecting hidden communities from observed interactions is a classical problem. 
Theoretical analysis of community detection has so far been mostly limited to models with non-overlapping communities such as the stochastic block model. In this paper, we provide guaranteed community detection for a family of probabilistic network models with overlapping communities, termed as the mixed membership Dirichlet model, first introduced in (2008). This model allows for nodes to have fractional memberships in multiple communities and assumes that the community memberships are drawn from a Dirichlet distribution. Moreover, it contains the stochastic block model as a special case. We propose a unified approach to learning communities in these models via a tensor spectral decomposition approach. Our estimator uses low-order moment tensor of the observed network, consisting of 3-star counts. Our learning method is based on simple linear algebraic operations such as singular value decomposition and tensor power iterations. We provide guaranteed recovery of community memberships and model parameters, and present a careful finite sample analysis of our learning method. Additionally, our results match the best known scaling requirements for the special case of the (homogeneous) stochastic block model.", "Community detection is an important technique to understand structures and patterns in complex networks. Recently, overlapping community detection becomes a trend due to the ubiquity of overlapping and nested communities in real world. However, existing approaches have ignored the use of implicit link preference information, i.e., links can reflect a node's preference on the targets of connections it wants to build. This information has strong impact on community detection since a node prefers to build links with nodes inside its community than those outside its community. In this paper, we propose a preference-based nonnegative matrix factorization (PNMF) model to incorporate implicit link preference information. Unlike conventional matrix factorization approaches, which simply approximate the original adjacency matrix in value, our model maximizes the likelihood of the preference order for each node by following the intuition that a node prefers its neighbors than other nodes. Our model overcomes the indiscriminate penalty problem in which non-linked pairs inside one community are equally penalized in objective functions as those across two communities. We propose a learning algorithm which can learn a node-community membership matrix via stochastic gradient descent with bootstrap sampling. We evaluate our PNMF model on several real-world networks. Experimental results show that our model outperforms state-of-the-art approaches and can be applied to large datasets." ] }
1901.06958
2911953725
Surface Electromyography (sEMG) records muscles' electrical activity from a restricted area of the skin using electrodes. sEMG-based gesture recognition is extremely sensitive to inter-session and inter-subject variances. We propose a model and a deep-learning-based domain adaptation method to approximate the domain shift for recognition accuracy enhancement. Experiments on sparse and High-Density (HD) sEMG datasets validate that our approach outperforms state-of-the-art methods.
From the perspective of contraction pattern, hand gestures can cause muscles to contract in an isotonic, isometric, or mixed pattern. Isotonic contractions involve muscular contractions against resistance in which the length of the muscle changes. In contrast to isotonic contractions, isometric contractions create no change in muscle length, but tension and energy fluctuate. An isometric contraction is typically performed against an immovable object @cite_16 .
{ "cite_N": [ "@cite_16" ], "mid": [ "2066864006" ], "abstract": [ "Abstract This article reports an experimental study that aimed to quantitatively analyze motion coordination patterns across digits 2–5 (index to little finger), and examine the kinematic synergies during manipulative and gestic acts. Twenty-eight subjects (14 males and 14 females) performed two types of tasks, both right-handed: (1) cylinder-grasping that involved concurrent voluntary flexion of digits 2–5, and (2) voluntary flexion of individual fingers from digit 2 to 5 (i.e., one at a time). A five-camera opto-electronic motion capture system measured trajectories of 21 miniature reflective markers strategically placed on the dorsal surface landmarks of the hand. Joint angular profiles for 12 involved flexion–extension degrees of freedom (DOF's) were derived from the measured coordinates of surface markers. Principal components analysis (PCA) was used to examine the temporal covariation between joint angles. A mathematical modeling procedure, based on hyperbolic tangent functions, characterized the sigmoidal shaped angular profiles with four kinematically meaningful parameters. The PCA results showed that for all the movement trials ( n =280), two principal components accounted for at least 98 of the variance. The angular profiles ( n =2464) were accurately characterized, with the mean (±SD) coefficient of determination ( R 2 ) and root-mean-square-error (RMSE) being 0.95 (±0.12) and 1.03° (±0.82°), respectively. The resulting parameters which quantified both the spatial and temporal aspects of angular profiles revealed stereotypical patterns including a predominant (87 of all trials) proximal-to-distal flexion sequence and characteristic interdependence – involuntary joint flexion induced by the voluntarily flexed joint. The principal components' weights and the kinematic parameters also exhibited qualitatively similar variation patterns. Motor control interpretations and new insights regarding the underlying synergistic mechanisms, particularly in relation to previous findings on force synergies, are discussed." ] }
1901.06958
2911953725
Surface Electromyography (sEMG) records muscles' electrical activity from a restricted area of the skin using electrodes. sEMG-based gesture recognition is extremely sensitive to inter-session and inter-subject variances. We propose a model and a deep-learning-based domain adaptation method to approximate the domain shift for recognition accuracy enhancement. Experiments on sparse and High-Density (HD) sEMG datasets validate that our approach outperforms state-of-the-art methods.
Muscles generate an electric voltage during contraction and relaxation. EMG detectors measure this signal through electrodes that are attached to the skin. An analogue-to-digital conversion is performed with a sampling rate of 100 to 2000 Hz, and the outcome is usually normalized into a range of [-1.0, 1.0]. The typical bandwidth of this signal is 5-450 Hz @cite_2 . This set of time series (one per pair of electrodes) usually represents the input for gesture detection algorithms.
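A minimal scipy/numpy sketch of the acquisition-side preprocessing described above: band-pass filtering to the typical 5-450 Hz band and normalization into [-1.0, 1.0]. The sampling rate and the synthetic stand-in signal are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000.0                          # sampling rate in Hz (100-2000 Hz is typical)
t = np.arange(0, 1.0, 1.0 / fs)
raw = np.random.default_rng(0).normal(size=t.size)  # stand-in for one channel

# 4th-order Butterworth band-pass over the typical 5-450 Hz sEMG band.
b, a = butter(4, [5.0, 450.0], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, raw)       # zero-phase filtering

# Normalize into [-1.0, 1.0] by the peak absolute value.
emg = filtered / np.max(np.abs(filtered))
print(emg.min(), emg.max())
```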
{ "cite_N": [ "@cite_2" ], "mid": [ "2591690220" ], "abstract": [ "Advanced forearm prosthetic devices employ classifiers to recognize different electromyography (EMG) signal patterns, in order to identify the user’s intended motion gesture. The classification accuracy is one of the main determinants of real-time controllability of a prosthetic limb and hence the necessity to achieve as high an accuracy as possible. In this paper, we study the effects of the temporal and spatial information provided to the classifier on its off-line performance and analyze their inter-dependencies. EMG data associated with seven practical hand gestures were recorded from partial-hand and trans-radial amputee volunteers as well as able-bodied volunteers. An extensive investigation was conducted to study the effect of analysis window length, window overlap, and the number of electrode channels on the classification accuracy as well as their interactions. Our main discoveries are that the effect of analysis window length on classification accuracy is practically independent of the number of electrodes for all participant groups; window overlap has no direct influence on classifier performance, irrespective of the window length, number of channels, or limb condition; the type of limb deficiency and the existing channel count influence the reduction in classification error achieved by adding more number of channels; partial-hand amputees outperform trans-radial amputees, with classification accuracies of only 11.3 below values achieved by able-bodied volunteers." ] }
1901.06958
2911953725
Surface Electromyography (sEMG) records muscles' electrical activity from a restricted area of the skin using electrodes. sEMG-based gesture recognition is extremely sensitive to inter-session and inter-subject variances. We propose a model and a deep-learning-based domain adaptation method to approximate the domain shift for recognition accuracy enhancement. Experiments on sparse and High-Density (HD) sEMG datasets validate that our approach outperforms state-of-the-art methods.
Intra-session gesture recognition has been extensively researched. Existing sEMG-based solutions utilize time-domain, frequency-domain, and time-frequency-domain features. Many researchers have focused on presenting new sEMG features based on their domain knowledge, or on analyzing existing features to propose new feature sets. Traditional machine learning classifiers have been employed to recognize sEMG-based gestures, such as k-Nearest Neighbor (kNN) @cite_10 , Linear Discriminant Analysis (LDA) @cite_1 , Hidden Markov Models (HMM) @cite_5 , and Support Vector Machines (SVM) @cite_10 @cite_14 . The Convolutional Neural Network (CNN) architecture is the most widely used deep learning technique for sEMG-based gesture recognition. @cite_4 provided a novel CNN model to extract spatial information from instantaneous sEMG images and achieved state-of-the-art performance. @cite_7 applies a novel hybrid CNN-RNN architecture with superior results in the intra-session scenario.
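A minimal sklearn sketch of such a classical pipeline: window the multi-channel signal, extract simple time-domain features (mean absolute value and waveform length), and train an SVM. The window length, feature choice, and synthetic data are illustrative assumptions, not the setups of the cited works.

```python
import numpy as np
from sklearn.svm import SVC

def td_features(window):
    # window: (samples, channels) -> per-channel MAV and waveform length.
    mav = np.mean(np.abs(window), axis=0)
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)
    return np.concatenate([mav, wl])

rng = np.random.default_rng(0)
n_windows, samples, channels = 200, 150, 8
X = np.stack([td_features(rng.normal(size=(samples, channels)))
              for _ in range(n_windows)])
y = rng.integers(0, 4, size=n_windows)   # 4 toy gesture labels

clf = SVC(kernel="rbf", C=1.0).fit(X[:150], y[:150])
print(clf.score(X[150:], y[150:]))       # chance-level on random data
```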
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_7", "@cite_1", "@cite_5", "@cite_10" ], "mid": [ "2898716605", "2295038166", "2415469094", "2119656522", "2112480082", "2110910614" ], "abstract": [ "The surface electromyography (sEMG)-based gesture recognition with deep learning approach plays an increasingly important role in human-computer interaction. Existing deep learning architectures are mainly based on Convolutional Neural Network (CNN) architecture which captures spatial information of electromyogram signal. Motivated by the sequential nature of electromyogram signal, we propose an attention-based hybrid CNN and RNN (CNN-RNN) architecture to better capture temporal properties of electromyogram signal for gesture recognition problem. Moreover, we present a new sEMG image representation method based on a traditional feature vector which enables deep learning architectures to extract implicit correlations between different channels for sparse multi-channel electromyogram signal. Extensive experiments on five sEMG benchmark databases show that the proposed method outperforms all reported state-of-the-art methods on both sparse multi-channel and high-density sEMG databases. To compare with the existing works, we set the window length to 200ms for NinaProDB1 and NinaProDB2, and 150ms for BioPatRec sub-database, CapgMyo sub-database, and csl-hdemg databases. The recognition accuracies of the aforementioned benchmark databases are 87.0 , 82.2 , 94.1 , 99.7 and 94.5 , which are 9.2 , 3.5 , 1.2 , 0.2 and 5.2 higher than the state-of-the-art performance, respectively.", "This paper describes a novel method called Deep Dynamic Neural Networks (DDNN) for multimodal gesture recognition. A semi-supervised hierarchical dynamic framework based on a Hidden Markov Model (HMM) is proposed for simultaneous gesture segmentation and recognition where skeleton joint information, depth and RGB images, are the multimodal input observations. Unlike most traditional approaches that rely on the construction of complex handcrafted features, our approach learns high-level spatio-temporal representations using deep neural networks suited to the input modality: a Gaussian-Bernouilli Deep Belief Network ( DBN ) to handle skelet al dynamics, and a 3D Convolutional Neural Network ( 3DCNN ) to manage and fuse batches of depth and RGB images. This is achieved through the modeling and learning of the emission probabilities of the HMM required to infer the gesture sequence. This purely data driven approach achieves a Jaccard index score of 0.81 in the ChaLearn LAP gesture spotting challenge. The performance is on par with a variety of state-of-the-art hand-tuned feature-based approaches and other learning-based methods, therefore opening the door to the use of deep learning techniques in order to further explore multimodal time series data.", "Temporal dynamics of postures over time is crucial for sequence-based action recognition. Human actions can be represented by the corresponding motions of articulated skeleton. Most of the existing approaches for skeleton based action recognition model the spatial-temporal evolution of actions based on hand-crafted features. As a kind of hierarchically adaptive filter banks, Convolutional Neural Network (CNN) performs well in representation learning. In this paper, we propose an end-to-end hierarchical architecture for skeleton based action recognition with CNN. 
Firstly, we represent a skeleton sequence as a matrix by concatenating the joint coordinates in each instant and arranging those vector representations in a chronological order. Then the matrix is quantified into an image and normalized to handle the variable-length problem. The final image is fed into a CNN model for feature extraction and recognition. For the specific structure of such images, the simple max-pooling plays an important role on spatial feature selection as well as temporal frequency adjustment, which can obtain more discriminative joint information for different actions and meanwhile address the variable-frequency problem. Experimental results demonstrate that our method achieves the state-of-art performance with high computational efficiency, especially surpassing the existing result by more than 15 percentage on the challenging ChaLearn gesture recognition dataset.", "This paper presents a novel and real-time system for interaction with an application or video game via hand gestures. Our system includes detecting and tracking bare hand in cluttered background using skin detection and hand posture contour comparison algorithm after face subtraction, recognizing hand gestures via bag-of-features and multiclass support vector machine (SVM) and building a grammar that generates gesture commands to control an application. In the training stage, after extracting the keypoints for every training image using the scale invariance feature transform (SIFT), a vector quantization technique will map keypoints from every training image into a unified dimensional histogram vector (bag-of-words) after K-means clustering. This histogram is treated as an input vector for a multiclass SVM to build the training classifier. In the testing stage, for every frame captured from a webcam, the hand is detected using our algorithm, then, the keypoints are extracted for every small image that contains the detected hand gesture only and fed into the cluster model to map them into a bag-of-words vector, which is finally fed into the multiclass SVM training classifier to recognize the hand gesture.", "For one-shot learning gesture recognition, two important challenges are: how to extract distinctive features and how to learn a discriminative model from only one training sample per gesture class. For feature extraction, a new spatio-temporal feature representation called 3D enhanced motion scale-invariant feature transform (3D EMoSIFT) is proposed, which fuses RGB-D data. Compared with other features, the new feature set is invariant to scale and rotation, and has more compact and richer visual representations. For learning a discriminative model, all features extracted from training samples are clustered with the k-means algorithm to learn a visual codebook. Then, unlike the traditional bag of feature (BoF) models using vector quantization (VQ) to map each feature into a certain visual codeword, a sparse coding method named simulation orthogonal matching pursuit (SOMP) is applied and thus each feature can be represented by some linear combination of a small number of codewords. Compared with VQ, SOMP leads to a much lower reconstruction error and achieves better performance. 
The proposed approach has been evaluated on ChaLearn gesture database and the result has been ranked amongst the top best performing techniques on ChaLearn gesture challenge (round 2).", "We used Fisher linear discriminant analysis (LDA), static neural networks (NN), and focused time delay neural networks (TDNN) for gesture recognition. Gestures were collected in form of acceleration signals along three axes from six participants. A sports watch containing a 3-axis accelerometer, was worn by the users, who performed four gestures. Each gesture was performed for ten seconds, at the speed of one gesture per second. User-dependent and user-independent k-fold cross validations were carried out to measure the classifier performance. Using first and second order statistical descriptors of acceleration signals from validation datasets, LDA and NN classifiers were able to recognize the gestures at an average rate of 86 and 97 (user-dependent) and 89 and 85 (user-independent), respectively. TDNNs proved to be the best, achieving near perfect classification rates both for user-dependent and user-independent scenarios, while operating directly on the acceleration signals alleviating the need for explicit feature extraction." ] }
1901.06958
2911953725
Surface Electromyography (sEMG) records muscles' electrical activity from a restricted area of the skin using electrodes. sEMG-based gesture recognition is extremely sensitive to inter-session and inter-subject variances. We propose a model and a deep-learning-based domain adaptation method to approximate the domain shift for recognition accuracy enhancement. Experiments on sparse and High-Density (HD) sEMG datasets validate that our approach outperforms state-of-the-art methods.
LSTM units contain a set of gates that control when information enters the memory (input gate: $i_t$), when it is output (output gate: $o_t$), and when it is forgotten (forget gate: $f_t$), as captured by the standard gate equations below. This architecture allows the neural network to learn longer-term dependencies, and LSTMs are widely used to analyze time-series data @cite_3 . In the usual LSTM cell diagram, yellow rectangles represent a neural network layer, circles are point-wise operations, and arrows denote the flow of data.
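For concreteness, the gate and state updates in their standard form (the cited work may use a slightly different variant) are:

```latex
\begin{aligned}
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) && \text{(forget gate)}\\
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) && \text{(input gate)}\\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) && \text{(output gate)}\\
\tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c), \quad
c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t, \quad
h_t = o_t \odot \tanh(c_t)
\end{aligned}
```

Here $\sigma$ is the logistic sigmoid and $\odot$ is element-wise multiplication; the cell state $c_t$ is the memory that the three gates regulate.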
{ "cite_N": [ "@cite_3" ], "mid": [ "2787284976" ], "abstract": [ "Long Short-Term Memory (LSTM) is one of the most widely used recurrent structures in sequence modeling. It aims to use gates to control information flow (e.g., whether to skip some information or not) in the recurrent computations, although its practical implementation based on soft gates only partially achieves this goal. In this paper, we propose a new way for LSTM training, which pushes the output values of the gates towards 0 or 1. By doing so, we can better control the information flow: the gates are mostly open or closed, instead of in a middle state, which makes the results more interpretable. Empirical studies show that (1) Although it seems that we restrict the model capacity, there is no performance drop: we achieve better or comparable performances due to its better generalization ability; (2) The outputs of gates are not sensitive to their inputs: we can easily compress the LSTM unit in multiple ways, e.g., low-rank approximation and low-precision approximation. The compressed models are even better than the baseline models without compression." ] }
1901.06355
2909083954
Reliably detecting anomalies in a given set of images is a task of high practical relevance for visual quality inspection, surveillance, or medical image analysis. Autoencoder neural networks learn to reconstruct normal images, and hence can classify as anomalies those images where the reconstruction error exceeds some threshold. Here we analyze a fundamental problem of this approach when the training set is contaminated with a small fraction of outliers. We find that continued training of autoencoders inevitably reduces the reconstruction error of outliers, and hence degrades the anomaly detection performance. To counteract this effect, an adversarial autoencoder architecture is adapted, which imposes a prior distribution on the latent representation, typically placing anomalies into low-likelihood regions. Utilizing the likelihood model, potential anomalies can be identified and rejected already during training, which results in an anomaly detector that is significantly more robust to the presence of outliers during training.
Almost all approaches for anomaly detection with autoencoders require the training data to consist of normal examples only, but this alone is no guarantee that anomalies will have large reconstruction errors. Robust deep autoencoders @cite_12 address this issue by combining denoising autoencoders with robust PCA, thereby isolating noise and outliers from the training of the reconstruction. The method achieves significantly better results on MNIST in the presence of anomalies in the training set. A combination of deep learning and kernel-based methods for anomaly detection in high-dimensional data was proposed by @cite_18 , who combine a Deep Belief Network for feature extraction and a one-class SVM for anomaly detection in the compressed latent space. Their method can deal with anomalies in the training data, but does not use this information to refine the training set. Also considering adversarial detection, @cite_24 proposed density-based measures in a ConvNet to identify data points that lie outside the data manifold as potential adversarial examples. They increase the robustness of their method by adding a Bayesian uncertainty estimate, which handles complementary situations.
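A minimal numpy sketch of the reconstruction-error criterion underlying these approaches: score each input by its reconstruction error under a simple (here linear, PCA-like) autoencoder and flag scores above a threshold chosen on the training data. The linear encoder/decoder and the percentile threshold are illustrative simplifications, not the architectures of the cited works.

```python
import numpy as np

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(500, 10))           # training set: normal data
test = np.vstack([rng.normal(0.0, 1.0, size=(20, 10)),
                  rng.normal(6.0, 1.0, size=(5, 10))])   # 5 outliers appended

# Linear "autoencoder": project onto the top-k principal directions and back.
mu = normal.mean(axis=0)
_, _, Vt = np.linalg.svd(normal - mu, full_matrices=False)
V = Vt[:3].T                                             # k = 3 latent dimensions

def reconstruction_error(x):
    z = (x - mu) @ V                                     # encode
    x_hat = z @ V.T + mu                                 # decode
    return np.sum((x - x_hat) ** 2, axis=1)

threshold = np.percentile(reconstruction_error(normal), 99)
print(reconstruction_error(test) > threshold)  # last 5 should typically be True
```

The contamination problem the paper analyzes shows up here too: if outliers sit in the training set, the fitted directions (and hence the threshold) drift toward them, shrinking their scores.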
{ "cite_N": [ "@cite_24", "@cite_18", "@cite_12" ], "mid": [ "2340896621", "2922995320", "2127979711" ], "abstract": [ "High-dimensional problem domains pose significant challenges for anomaly detection. The presence of irrelevant features can conceal the presence of anomalies. This problem, known as the 'curse of dimensionality', is an obstacle for many anomaly detection techniques. Building a robust anomaly detection model for use in high-dimensional spaces requires the combination of an unsupervised feature extractor and an anomaly detector. While one-class support vector machines are effective at producing decision surfaces from well-behaved feature vectors, they can be inefficient at modelling the variation in large, high-dimensional datasets. Architectures such as deep belief networks (DBNs) are a promising technique for learning robust features. We present a hybrid model where an unsupervised DBN is trained to extract generic underlying features, and a one-class SVM is trained from the features learned by the DBN. Since a linear kernel can be substituted for nonlinear ones in our hybrid model without loss of accuracy, our model is scalable and computationally efficient. The experimental results show that our proposed model yields comparable anomaly detection performance with a deep autoencoder, while reducing its training and testing time by a factor of 3 and 1000, respectively. HighlightsWe use a combination of a one-class SVM and deep learning.In our model linear kernels can be used rather than nonlinear ones.Our model delivers a comparable accuracy with a deep autoencoder.Our model executes 3times faster in training and 1000 faster than a deep autoencoder.", "We propose the Autoencoding Binary Classifiers (ABC), a novel supervised anomaly detector based on the Autoencoder (AE). There are two main approaches in anomaly detection: supervised and unsupervised. The supervised approach accurately detects the known anomalies included in training data, but it cannot detect the unknown anomalies. Meanwhile, the unsupervised approach can detect both known and unknown anomalies that are located away from normal data points. However, it does not detect known anomalies as accurately as the supervised approach. Furthermore, even if we have labeled normal data points and anomalies, the unsupervised approach cannot utilize these labels. The ABC is a probabilistic binary classifier that effectively exploits the label information, where normal data points are modeled using the AE as a component. By maximizing the likelihood, the AE in the proposed ABC is trained to minimize the reconstruction error for normal data points, and to maximize it for known anomalies. Since our approach becomes able to reconstruct the normal data points accurately and fails to reconstruct the known and unknown anomalies, it can accurately discriminate both known and unknown anomalies from normal data points. Experimental results show that the ABC achieves higher detection performance than existing supervised and unsupervised methods.", "This paper proposes to use autoencoders with nonlinear dimensionality reduction in the anomaly detection task. The authors apply dimensionality reduction by using an autoencoder onto both artificial data and real data, and compare it with linear PCA and kernel PCA to clarify its property. The artificial data is generated from Lorenz system, and the real data is the spacecrafts' telemetry data. 
This paper demonstrates that autoencoders are able to detect subtle anomalies which linear PCA fails. Also, autoencoders can increase their accuracy by extending them to denoising autoenconders. Moreover, autoencoders can be useful as nonlinear techniques without complex computation as kernel PCA requires. Finaly, the authors examine the learned features in the hidden layer of autoencoders, and present that autoencoders learn the normal state properly and activate differently with anomalous input." ] }
1901.06560
2914935482
There is a disconnect between explanatory artificial intelligence (XAI) methods and the types of explanations that are useful for and demanded by society (policy makers, government officials, etc.). Questions that experts in artificial intelligence (AI) ask opaque systems provide inside explanations, focused on debugging, reliability, and validation. These are different from those that society will ask of these systems to build trust and confidence in their decisions. Although explanatory AI systems can answer many questions that experts desire, they often do not explain why they made decisions in a way that is precise (true to the model) and understandable to humans. These outside explanations can be used to build trust, comply with regulatory and policy changes, and act as external validation. In this paper, we focus on XAI methods for deep neural networks (DNNs) because of DNNs' use in decision-making and their inherent opacity. We explore the types of questions that explanatory DNN systems can answer and discuss challenges in building explanatory systems that provide outside explanations for societal requirements and benefit.
In this work, we focus on the types of questions and explanations that explanatory DNN methods can answer. Recent work has looked at ways to correct neural network judgments @cite_12 and at different ways to audit such networks by detecting biases @cite_7 . But these judgments are not enough to completely understand the model's decision-making. Other work answers "why" questions by finding similar data points @cite_24 . Although these methods are clearly interpretable, they do not provide any unique insights into why the model made those decisions. Other work examining best practices for explanation @cite_18 provides a set of categories, but does not evaluate the questions that explanatory systems should be able to answer, which is necessary for policy makers and for societal trust in DNN decision processes.
{ "cite_N": [ "@cite_24", "@cite_18", "@cite_7", "@cite_12" ], "mid": [ "2785760873", "2953295770", "2775079417", "2240067561" ], "abstract": [ "Understanding the flow of information in Deep Neural Networks (DNNs) is a challenging problem that has gain increasing attention over the last few years. While several methods have been proposed to explain network predictions, there have been only a few attempts to compare them from a theoretical perspective. What is more, no exhaustive empirical comparison has been performed in the past. In this work we analyze four gradient-based attribution methods and formally prove conditions of equivalence and approximation between them. By reformulating two of these methods, we construct a unified framework which enables a direct comparison, as well as an easier implementation. Finally, we propose a novel evaluation metric, called Sensitivity-n and test the gradient-based attribution methods alongside with a simple perturbation-based attribution method on several datasets in the domains of image and text classification, using various network architectures.", "Understanding the flow of information in Deep Neural Networks (DNNs) is a challenging problem that has gain increasing attention over the last few years. While several methods have been proposed to explain network predictions, there have been only a few attempts to compare them from a theoretical perspective. What is more, no exhaustive empirical comparison has been performed in the past. In this work, we analyze four gradient-based attribution methods and formally prove conditions of equivalence and approximation between them. By reformulating two of these methods, we construct a unified framework which enables a direct comparison, as well as an easier implementation. Finally, we propose a novel evaluation metric, called Sensitivity-n and test the gradient-based attribution methods alongside with a simple perturbation-based attribution method on several datasets in the domains of image and text classification, using various network architectures.", "Spiking neural networks (SNNs) can potentially offer an efficient way of doing inference because the neurons in the networks are sparsely activated and computations are event-driven. Previous work showed that simple continuous-valued deep Convolutional Neural Networks (CNNs) can be converted into accurate spiking equivalents. These networks did not include certain common operations such as max-pooling, softmax, batch-normalization and Inception-modules. This paper presents spiking equivalents of these operations therefore allowing conversion of nearly arbitrary CNN architectures. We show conversion of popular CNN architectures, including VGG-16 and Inception-v3, into SNNs that produce the best results reported to date on MNIST, CIFAR-10 and the challenging ImageNet dataset. SNNs can trade off classification error rate against the number of available operations whereas deep continuous-valued neural networks require a fixed number of operations to achieve their classification error rate. From the examples of LeNet for MNIST and BinaryNet for CIFAR-10, we show that with an increase in error rate of a few percentage points, the SNNs can achieve more than 2x reductions in operations compared to the original CNNs. 
This highlights the potential of SNNs in particular when deployed on power-efficient neuromorphic spiking neuron chips, for use in embedded applications.", "Deep neural networks (DNNs) have demonstrated impressive performance in complex machine learning tasks such as image classification or speech recognition. However, due to their multilayer nonlinear structure, they are not transparent, i.e., it is hard to grasp what makes them arrive at a particular classification or recognition decision, given a new unseen data sample. Recently, several approaches have been proposed enabling one to understand and interpret the reasoning embodied in a DNN for a single test image. These methods quantify the “importance” of individual pixels with respect to the classification decision and allow a visualization in terms of a heatmap in pixel input space. While the usefulness of heatmaps can be judged subjectively by a human, an objective quality measure is missing. In this paper, we present a general methodology based on region perturbation for evaluating ordered collections of pixels such as heatmaps. We compare heatmaps computed by three different methods on the SUN397, ILSVRC2012, and MIT Places data sets. Our main result is that the recently proposed layer-wise relevance propagation algorithm qualitatively and quantitatively provides a better explanation of what made a DNN arrive at a particular classification decision than the sensitivity-based approach or the deconvolution method. We provide theoretical arguments to explain this result and discuss its practical implications. Finally, we investigate the use of heatmaps for unsupervised assessment of the neural network performance." ] }