aid: string (lengths 9 to 15)
mid: string (lengths 7 to 10)
abstract: string (lengths 78 to 2.56k)
related_work: string (lengths 92 to 1.77k)
ref_abstract: dict
1812.06876
2905564348
Recent advancements in sequence-to-sequence neural network architectures have led to improved natural language understanding. When building a neural network-based Natural Language Understanding component, one main challenge is to collect enough training data. The generation of a synthetic dataset is an inexpensive and quick way to collect data. Since this data often has less variety than real natural language, neural networks often have problems generalizing to unseen utterances during testing. In this work, we address this challenge by using multi-task learning. We train on out-of-domain real data alongside in-domain synthetic data to improve natural language understanding. We evaluate this approach in the domain of airline travel information with two synthetic datasets. As out-of-domain real data, we test two datasets based on the subtitles of movies and series. By using an attention-based encoder-decoder model, we were able to improve the F1-score over strong baselines from 80.76 to 84.98 on the smaller synthetic dataset.
Multi-task learning has been applied in many machine learning applications, e.g., in facial landmark detection, an application in the area of computer vision @cite_1 .
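To make the multi-task setup from the abstract concrete, below is a minimal sketch of one training step that mixes an in-domain synthetic batch with an out-of-domain subtitle batch through a shared sequence-to-sequence model. The model, dimensions, and loss weighting are hypothetical placeholders, not the authors' implementation (the paper uses an attention-based encoder-decoder).

```python
import torch
from torch import nn

class SharedSeq2Seq(nn.Module):
    """Hypothetical shared encoder-decoder standing in for the paper's attention model."""
    def __init__(self, vocab_size=1000, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, src, tgt):
        _, h = self.encoder(self.embed(src))   # encode the source utterance
        dec, _ = self.decoder(self.embed(tgt), h)
        return self.out(dec)

def multitask_step(model, opt, synth_batch, subtitle_batch, aux_weight=0.5):
    """One update mixing in-domain synthetic data with out-of-domain real data."""
    criterion = nn.CrossEntropyLoss()
    opt.zero_grad()
    total = 0.0
    for (src, tgt), weight in [(synth_batch, 1.0), (subtitle_batch, aux_weight)]:
        logits = model(src, tgt[:, :-1])                      # teacher forcing
        loss = criterion(logits.reshape(-1, logits.size(-1)),
                         tgt[:, 1:].reshape(-1))
        total = total + weight * loss
    total.backward()
    opt.step()
    return float(total)

# Usage with fake token batches (batch of 8, length 12):
model = SharedSeq2Seq()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
fake = lambda: (torch.randint(0, 1000, (8, 12)), torch.randint(0, 1000, (8, 12)))
print(multitask_step(model, opt, fake(), fake()))
```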
{ "cite_N": [ "@cite_1" ], "mid": [ "1896424170" ], "abstract": [ "Facial landmark detection has long been impeded by the problems of occlusion and pose variation. Instead of treating the detection task as a single and independent problem, we investigate the possibility of improving detection robustness through multi-task learning. Specifically, we wish to optimize facial landmark detection together with heterogeneous but subtly correlated tasks, e.g. head pose estimation and facial attribute inference. This is non-trivial since different tasks have different learning difficulties and convergence rates. To address this problem, we formulate a novel tasks-constrained deep model, with task-wise early stopping to facilitate learning convergence. Extensive evaluations show that the proposed task-constrained learning (i) outperforms existing methods, especially in dealing with faces with severe occlusion and pose variation, and (ii) reduces model complexity drastically compared to the state-of-the-art method based on cascaded deep model [21]." ] }
1812.06898
2904840533
In distributed computing frameworks like MapReduce, Spark, and Dryad, a coflow is a set of flows transferring data between two stages of a job. The job cannot start its next stage unless all flows in the coflow finish. To improve the execution performance of such a job, it is crucial to reduce the completion time of a coflow, which can contribute more than 50% of the job completion time. While several schedulers have been proposed, we observe that routing, as a factor greatly impacting the Coflow Completion Time (CCT), has not been well considered. In this paper, we focus on the coflow scheduling problem and jointly consider routing and bandwidth allocation. We first provide an analytical solution to the problem of optimal bandwidth allocation with pre-determined routes. We then formulate the coflow scheduling problem as a Mixed Integer Non-linear Programming problem and present its relaxed convex optimization problem. We further propose two algorithms, CoRBA and its simplified version, CoRBA-fast, that jointly perform routing and bandwidth allocation for a given coflow while minimizing the CCT. Through both offline and online simulations, we demonstrate that CoRBA reduces the CCT by 40%-500% compared to the state-of-the-art algorithms. Simulation results also show that CoRBA-fast can be tens of times faster than all other algorithms with around 10% performance degradation compared to CoRBA, which makes the use of CoRBA-fast very applicable in practice.
Flow schedulers. Much research work has also been performed on reducing the average flow completion time @cite_1 @cite_5 @cite_10 @cite_11 @cite_16 @cite_4 . Rojas @cite_1 give a comprehensive survey on existing schemes for scheduling flows in data center networks. PDQ @cite_10 is a flow scheduling protocol which utilizes explicit rate control to allocate bandwidth to flows and enables flow preemption. pFabric @cite_11 is a datacenter transport design that decouples flow scheduling from rate control, in which flows are prioritized and switches implement a very simple priority-based scheduling/dropping mechanism. RepFlow @cite_16 is a transport design that replicates each short flow. It transmits the replicated and original flows via different paths, which reduces the probability of experiencing long queueing delay and therefore decreases the flow completion time. PIAS @cite_4 is an information-agnostic flow scheduling scheme that minimizes the FCT by mimicking the Shortest Job First strategy without any prior knowledge of incoming flows. While these existing schemes can reduce the FCT by using different strategies, they are not coflow-aware and therefore have a different optimization objective from our algorithms.
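As a concrete illustration of the information-agnostic idea behind PIAS described above, the sketch below assigns a flow to a priority queue purely from the bytes it has already sent, so short flows finish in high-priority queues without knowing flow sizes in advance. The demotion thresholds are made-up values, not the ones used in the paper.

```python
# Illustrative PIAS-style priority demotion: a flow starts in the highest
# priority queue and is demoted as its byte count crosses thresholds,
# mimicking Shortest Job First without prior knowledge of flow sizes.
DEMOTION_THRESHOLDS_BYTES = [10_000, 100_000, 1_000_000]  # hypothetical values

def pias_priority(bytes_sent: int) -> int:
    """Return the priority queue index (0 = highest) for a flow."""
    for queue_idx, threshold in enumerate(DEMOTION_THRESHOLDS_BYTES):
        if bytes_sent < threshold:
            return queue_idx
    return len(DEMOTION_THRESHOLDS_BYTES)  # lowest-priority queue

# Example: a 4 KB flow stays in queue 0, a 50 MB flow ends up in queue 3.
assert pias_priority(4_000) == 0
assert pias_priority(50_000_000) == 3
```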
{ "cite_N": [ "@cite_4", "@cite_1", "@cite_5", "@cite_16", "@cite_10", "@cite_11" ], "mid": [ "1657185548", "2064941967", "2101871381", "", "", "2117884704" ], "abstract": [ "Many existing data center network (DCN) flow scheduling schemes minimize flow completion times (FCT) based on prior knowledge of flows and custom switch functions, making them superior in performance but hard to use in practice. By contrast, we seek to minimize FCT with no prior knowledge and existing commodity switch hardware. To this end, we present PIAS, a DCN flow scheduling mechanism that aims to minimize FCT by mimicking Shortest Job First (SJF) on the premise that flow size is not known a priori. At its heart, PIAS leverages multiple priority queues available in existing commodity switches to implement a Multiple Level Feedback Queue (MLFQ), in which a PIAS flow is gradually demoted from higher-priority queues to lower-priority queues based on the number of bytes it has sent. As a result, short flows are likely to be finished in the first few high-priority queues and thus be prioritized over long flows in general, which enables PIAS to emulate SJF without knowing flow sizes beforehand. We have implemented a PIAS prototype and evaluated PIAS through both testbed experiments and ns- 2 simulations. We show that PIAS is readily deployable with commodity switches and backward compatible with legacy TCP IP stacks. Our evaluation results show that PIAS significantly outperforms existing information-agnostic schemes. For example, it reduces FCT by up to 50 and 40 over DCTCP [11] and L2DCT [27] respectively; and it only has a 4.9 performance gap to an ideal information-aware scheme, pFabric [13], for short flows under a production DCN workload.", "In this paper, we survey different existing schemes for the transmission of flows in Data Center Networks (DCNs). The transport of flows in DCNs must cope with the bandwidth demands of the traffic that a large number of data center applications generates and achieve high utilization of the data center infrastructure to make the data center financially viable. Traffic in DCNs roughly comprises short flows, which are generated by the Partition Aggregate model adopted by several applications and have sizes of a few kilobytes, and long flows, which are data for the operation and maintenance of the data center and have sizes on the order of megabytes. Short flows must be transmitted (or completed) as soon as possible or within a deadline, and long flows must be serviced with a minimum acceptable throughput. The coexistence of short and long flows may jeopardize achieving both performance objectives simultaneously. This challenge has motivated growing research on schemes for managing the transmission of flows in DCNs. We describe several recent schemes aimed at reducing the flow completion time in DCNs. We also present a summary of existing solutions for the incast traffic phenomenon. We provide a comparison and classification of the surveyed schemes, describe their advantages and disadvantages, and show the different trends for scheme design. For completeness, we describe some DCN architectures, discuss the traffic patterns of DCNs, and discuss why some existing versions of transport protocols may not be usable in DCNs. 
At the end, we discuss some of the identified research challenges.", "The soft real-time nature of large scale web applications in today's datacenters, combined with their distributed workflow, leads to deadlines being associated with the datacenter application traffic. A network flow is useful, and contributes to application throughput and operator revenue if, and only if, it completes within its deadline. Today's transport pro- tocols (TCP included), given their Internet origins, are agnostic to such flow deadlines. Instead, they strive to share network resources fairly. We show that this can hurt application performance. Motivated by these observations, and other (previously known) deficiencies of TCP in the datacenter environment, this paper presents the design and implementation of D3, a deadline-aware control protocol that is customized for the datacenter environment. D3 uses explicit rate control to apportion bandwidth according to flow deadlines. Evaluation from a 19-node, two-tier datacenter testbed shows that D3, even without any deadline information, easily outper- forms TCP in terms of short flow latency and burst tolerance. Further, by utilizing deadline information, D3 effectively doubles the peak load that the datacenter network cansupport.", "", "", "In this paper we present pFabric, a minimalistic datacenter transport design that provides near theoretically optimal flow completion times even at the 99th percentile for short flows, while still minimizing average flow completion time for long flows. Moreover, pFabric delivers this performance with a very simple design that is based on a key conceptual insight: datacenter transport should decouple flow scheduling from rate control. For flow scheduling, packets carry a single priority number set independently by each flow; switches have very small buffers and implement a very simple priority-based scheduling dropping mechanism. Rate control is also correspondingly simpler; flows start at line rate and throttle back only under high and persistent packet loss. We provide theoretical intuition and show via extensive simulations that the combination of these two simple mechanisms is sufficient to provide near-optimal performance." ] }
1812.07023
2905141912
Understanding audio-visual content and the ability to have an informative conversation about it have both been challenging areas for intelligent systems. The Audio Visual Scene-aware Dialog (AVSD) challenge, organized as a track of the Dialog System Technology Challenge 7 (DSTC7), proposes a combined task, where a system has to answer questions pertaining to a video given a dialogue with previous question-answer pairs and the video itself. We propose for this task a hierarchical encoder-decoder model which computes a multi-modal embedding of the dialogue context. It first embeds the dialogue history using two LSTMs. We extract video and audio frames at regular intervals and compute semantic features using pre-trained I3D and VGGish models, respectively. Before summarizing both modalities into fixed-length vectors using LSTMs, we use FiLM blocks to condition them on the embeddings of the current question, which allows us to reduce the dimensionality considerably. Finally, we use an LSTM decoder that we train with scheduled sampling and evaluate using beam search. Compared to the modality-fusing baseline model released by the AVSD challenge organizers, our model achieves relative improvements of more than 16%, scoring 0.36 BLEU-4, and more than 33%, scoring 0.997 CIDEr.
Automated evaluation of both task-oriented and non-task-oriented dialogue systems has also been a challenge @cite_9 @cite_16 . Most such dialogue systems are evaluated using per-turn evaluation metrics, since there is no suitable per-dialogue metric: conversations need not happen in a deterministic ordering of turns. These per-turn evaluation metrics are mostly word-overlap-based metrics such as BLEU, METEOR, ROUGE, and CIDEr, borrowed from the machine translation literature. Due to the diverse nature of possible responses, word-overlap metrics are not highly suitable for evaluating these tasks. Human evaluation of generated responses is considered the most reliable metric for such tasks, but it is cost-prohibitive, and hence the dialogue system literature continues to rely widely on word-overlap-based metrics.
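To make the word-overlap evaluation concrete, the sketch below scores a generated dialogue response against a single reference with sentence-level BLEU. It assumes NLTK is installed and is only meant to illustrate why such metrics penalize valid but lexically different answers.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "the cat is sitting on the red sofa".split()
near_copy = "the cat is sitting on a red sofa".split()
valid_but_different = "a cat rests on the couch".split()

smooth = SmoothingFunction().method1  # avoids zero scores for short sentences
for hypothesis in (near_copy, valid_but_different):
    score = sentence_bleu([reference], hypothesis, smoothing_function=smooth)
    print(" ".join(hypothesis), "->", round(score, 3))
# The paraphrase gets a much lower BLEU despite being an acceptable answer,
# which is the weakness of word-overlap metrics discussed above.
```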
{ "cite_N": [ "@cite_9", "@cite_16" ], "mid": [ "2729046720", "2328886022" ], "abstract": [ "Automated metrics such as BLEU are widely used in the machine translation literature. They have also been used recently in the dialogue community for evaluating dialogue response generation. However, previous work in dialogue response generation has shown that these metrics do not correlate strongly with human judgment in the non task-oriented dialogue setting. Task-oriented dialogue responses are expressed on narrower domains and exhibit lower diversity. It is thus reasonable to think that these automated metrics would correlate well with human judgment in the task-oriented setting where the generation task consists of translating dialogue acts into a sentence. We conduct an empirical study to confirm whether this is the case. Our findings indicate that these automated metrics have stronger correlation with human judgments in the task-oriented setting compared to what has been observed in the non task-oriented setting. We also observe that these metrics correlate even better for datasets which provide multiple ground truth reference sentences. In addition, we show that some of the currently available corpora for task-oriented language generation can be solved with simple models and advocate for more challenging datasets.", "We investigate evaluation metrics for dialogue response generation systems where supervised labels, such as task completion, are not available. Recent works in response generation have adopted metrics from machine translation to compare a model's generated response to a single target response. We show that these metrics correlate very weakly with human judgements in the non-technical Twitter domain, and not at all in the technical Ubuntu domain. We provide quantitative and qualitative results highlighting specific weaknesses in existing metrics, and provide recommendations for future development of better automatic evaluation metrics for dialogue systems." ] }
1812.06669
2904596301
As deep learning advances, algorithms for music composition increase in performance. However, most of the successful models are designed for specific musical structures. Here, we present BachProp, an algorithmic composer that can generate music scores in many styles given sufficient training data. To adapt BachProp to a broad range of musical styles, we propose a novel representation of music and train a deep network to predict the note transition probabilities of a given music corpus. In this paper, new music scores generated by BachProp are compared with the original corpora as well as with different network architectures and other related models. We show that BachProp captures important features of the original datasets better than other models and invite the reader to a qualitative comparison on a large collection of generated songs.
DeepBach @cite_13 is designed exclusively for songs with a constant number of voices (e.g., four voices for a typical Bach chorale) and a discretization of the rhythm into multiples of a base unit, e.g., 16th notes. The model achieves good results not only in generating novel songs but also in reharmonizing given melodies while respecting user-provided meta-information such as the temporal position of fermatas. The model works with a Gibbs-sampling-like procedure, where, for each voice and time step, one note is sampled from conditional distributions parameterized by deep neural networks. The conditioning is on the other voices in a time window surrounding the current time step. Additionally, a "temporal backbone" signals the position of the current 16th note relative to quarter notes, along with other meta-information. A special hold symbol can also be sampled instead of a note, to represent notes with a duration longer than one time step.
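The pseudo-Gibbs procedure described above can be summarized in a short sketch: repeatedly pick a voice and a time step and resample that single note from a model-conditional distribution. The `conditional_probs` callable is a hypothetical placeholder standing in for DeepBach's neural networks, so this is only a schematic of the sampling loop, not the paper's implementation.

```python
import random

def pseudo_gibbs_sample(score, conditional_probs, n_sweeps=10_000, note_range=range(128)):
    """Resample a (n_voices x n_steps) grid of note tokens in place.

    score[v][t] holds a note index or a hold symbol; conditional_probs
    (hypothetical) returns a weight for each candidate token given the
    surrounding context, standing in for DeepBach's voice-specific networks.
    """
    n_voices, n_steps = len(score), len(score[0])
    hold = "HOLD"                           # special symbol for sustained notes
    candidates = list(note_range) + [hold]
    for _ in range(n_sweeps):
        v = random.randrange(n_voices)      # pick a voice ...
        t = random.randrange(n_steps)       # ... and a time step
        # Condition on the other voices in a window around t, plus metadata
        # such as the subdivision position ("temporal backbone") and fermatas.
        weights = [conditional_probs(score, v, t, c) for c in candidates]
        score[v][t] = random.choices(candidates, weights=weights, k=1)[0]
    return score
```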
{ "cite_N": [ "@cite_13" ], "mid": [ "2963575853" ], "abstract": [ "This paper introduces DeepBach, a graphical model aimed at modeling polyphonic music and specifically hymn-like pieces. We claim that, after being trained on the chorale harmonizations by Johann Sebastian Bach, our model is capable of generating highly convincing chorales in the style of Bach. DeepBach's strength comes from the use of pseudo-Gibbs sampling coupled with an adapted representation of musical data. This is in contrast with many automatic music composition approaches which tend to compose music sequentially. Our model is also steerable in the sense that a user can constrain the generation by imposing positional constraints such as notes, rhythms or cadences in the generated score. We also provide a plugin on top of the MuseScore music editor making the interaction with Deep-Bach easy to use." ] }
1812.06544
2951085355
Human activity recognition based on video streams has received considerable attention in recent years. Due to the lack of depth information, RGB video based activity recognition performs poorly compared to RGB-D video based solutions. On the other hand, acquiring depth information, inertial data, etc. is costly and requires special equipment, whereas RGB video streams are available in ordinary cameras. Hence, our goal is to investigate whether similar or even higher accuracy can be achieved with the RGB-only modality. In this regard, we propose a novel framework that couples skeleton data extracted from RGB video with a deep Bidirectional Long Short Term Memory (BLSTM) model for activity recognition. A big challenge of training such a deep network is the limited training data, and exploring the RGB-only stream significantly exacerbates the difficulty. We therefore propose a set of algorithmic techniques to train this model effectively, e.g., data augmentation, dynamic frame dropout and gradient injection. The experiments demonstrate that our RGB-only solution surpasses the state-of-the-art approaches, which all exploit RGB-D video streams, by a notable margin. This makes our solution widely deployable with ordinary cameras.
Human activity recognition has been extensively studied in recent years @cite_29 @cite_21 @cite_3 . Most state-of-the-art methods extract handcrafted features from RGB videos and rely on traditional shallow classifiers for activity classification @cite_35 @cite_18 @cite_30 @cite_11 @cite_37 . For example, @cite_35 present a method that identifies spatio-temporal interest points and classifies actions using SVMs. @cite_18 introduce the concept of motion context to capture spatio-temporal structure. Liu and Shah @cite_30 consider the correlation among features. @cite_11 propose to calculate the difference between subsequent frames to estimate the focus of attention. These methods often achieve very high accuracy. However, since handcrafted features are highly data dependent, these methods are not very robust to changes of environment. We instead utilize OpenPose to extract salient skeleton features from raw RGB frames, which makes the proposed method less data dependent, robust to different environments, and therefore widely deployable in real-life applications.
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_18", "@cite_37", "@cite_29", "@cite_21", "@cite_3", "@cite_11" ], "mid": [ "2110142955", "2034328688", "45528431", "", "1551696353", "2963218601", "2274499208", "2136917337" ], "abstract": [ "In this paper, we present a novel approach for automatically learning a compact and yet discriminative appearance-based human action model. A video sequence is represented by a bag of spatiotemporal features called video-words by quantizing the extracted 3D interest points (cuboids) from the videos. Our proposed approach is able to automatically discover the optimal number of video-word clusters by utilizing maximization of mutual information(MMI). Unlike the k-means algorithm, which is typically used to cluster spatiotemporal cuboids into video words based on their appearance similarity, MMI clustering further groups the video-words, which are highly correlated to some group of actions. To capture the structural information of the learnt optimal video-word clusters, we explore the correlation of the compact video-word clusters. We use the modified correlogram, which is not only translation and rotation invariant, but also somewhat scale invariant. We extensively test our proposed approach on two publicly available challenging datasets: the KTH dataset and IXMAS multiview dataset. To the best of our knowledge, we are the first to try the bag of video-words related approach on the multiview dataset. We have obtained very impressive results on both datasets.", "Local space-time features capture local events in video and can be adapted to the size, the frequency and the velocity of moving patterns. In this paper, we demonstrate how such features can be used for recognizing complex motion patterns. We construct video representations in terms of local space-time features and integrate such representations with SVM classification schemes for recognition. For the purpose of evaluation we introduce a new video database containing 2391 sequences of six human actions performed by 25 people in four different scenarios. The presented results of action recognition justify the proposed method and demonstrate its advantage compared to other relative approaches for action recognition.", "One of the key challenges in human action recognition from video sequences is how to model an action sufficiently. Therefore, in this paper we propose a novel motion-based representation called Motion Context (MC), which is insensitive to the scale and direction of an action, by employing image representation techniques. A MC captures the distribution of the motion words (MWs) over relative locations in a local region of the motion image (MI) around a reference point and thus summarizes the local motion information in a rich 3D MC descriptor. In this way, any human action can be represented as a 3D descriptor by summing up all the MC descriptors of this action. For action recognition, we propose 4 different recognition configurations: MW+pLSA, MW+SVM, MC+w 3-pLSA (a new direct graphical model by extending pLSA), and MC+SVM. We test our approach on two human action video datasets from KTH and Weizmann Institute of Science (WIS) and our performances are quite promising. For the KTH dataset, the proposed MC representation achieves the highest performance using the proposed w 3-pLSA. 
For the WIS dataset, the best performance of the proposed MC is comparable to the state of the art.", "", "Human action recognition has been an important topic in computer vision due to its many applications such as video surveillance, human machine interaction and video retrieval. One core problem behind these applications is automatically recognizing low-level actions and high-level activities of interest. The former is usually the basis for the latter. This survey gives an overview of the most recent advances in human action recognition during the past several years, following a well-formed taxonomy proposed by a previous survey [1]. From this state-of-the-art survey, researchers can view a panorama of progress in this area for future research.", "Understanding human actions in visual data is tied to advances in complementary research areas including object recognition, human dynamics, domain adaptation and semantic segmentation. Over the last decade, human action analysis evolved from earlier schemes that are often limited to controlled environments to nowadays advanced solutions that can learn from millions of videos and apply to almost all daily activities. Given the broad range of applications from video surveillance to humancomputer interaction, scientific milestones in action recognition are achieved more rapidly, eventually leading to the demise of what used to be good in a short time. This motivated us to provide a comprehensive review of the notable steps taken towards recognizing human actions. To this end, we start our discussion with the pioneering methods that use handcrafted representations, and then, navigate into the realm of deep learning based approaches. We aim to remain objective throughout this survey, touching upon encouraging improvements as well as inevitable fallbacks, in the hope of raising fresh questions and motivating new research directions for the reader. We provide a detailed review of the work on human action recognition over the past decade.We refer to actions as meaningful human motions.Including Hand-crafted representations methods, we review the impact of Deep-nets on action recognition.We follow a systematic taxonomy to highlight the essence of both Hand-crafted and Deep-net solutions.We present a comparison of methods at their algorithmic level and performance.", "A number of review or survey articles have previously appeared on human action recognition where either vision sensors or inertial sensors are used individually. Considering that each sensor modality has its own limitations, in a number of previously published papers, it has been shown that the fusion of vision and inertial sensor data improves the accuracy of recognition. This survey article provides an overview of the recent investigations where both vision and inertial sensors are used together and simultaneously to perform human action recognition more effectively. The thrust of this survey is on the utilization of depth cameras and inertial sensors as these two types of sensors are cost-effective, commercially available, and more significantly they both provide 3D human action data. An overview of the components necessary to achieve fusion of data from depth and inertial sensors is provided. 
In addition, a review of the publicly available datasets that include depth and inertial data which are simultaneously captured via depth and inertial sensors is presented.", "Much of recent action recognition research is based on space-time interest points extracted from video using a Bag of Words (BOW) representation. It mainly relies on the discriminative power of individual local space-time descriptors, whilst ignoring potentially valuable information about the global spatio-temporal distribution of interest points. In this paper, we propose a novel action recognition approach which differs significantly from previous interest points based approaches in that only the global spatiotemporal distribution of the interest points are exploited. This is achieved through extracting holistic features from clouds of interest points accumulated over multiple temporal scales followed by automatic feature selection. Our approach avoids the non-trivial problems of selecting the optimal space-time descriptor, clustering algorithm for constructing a codebook, and selecting codebook size faced by previous interest points based methods. Our model is able to capture smooth motions, robust to view changes and occlusions at a low computation cost. Experiments using the KTH and WEIZMANN datasets demonstrate that our approach outperforms most existing methods." ] }
1812.06544
2951085355
Human activity recognition based on video streams has received considerable attention in recent years. Due to the lack of depth information, RGB video based activity recognition performs poorly compared to RGB-D video based solutions. On the other hand, acquiring depth information, inertial data, etc. is costly and requires special equipment, whereas RGB video streams are available in ordinary cameras. Hence, our goal is to investigate whether similar or even higher accuracy can be achieved with the RGB-only modality. In this regard, we propose a novel framework that couples skeleton data extracted from RGB video with a deep Bidirectional Long Short Term Memory (BLSTM) model for activity recognition. A big challenge of training such a deep network is the limited training data, and exploring the RGB-only stream significantly exacerbates the difficulty. We therefore propose a set of algorithmic techniques to train this model effectively, e.g., data augmentation, dynamic frame dropout and gradient injection. The experiments demonstrate that our RGB-only solution surpasses the state-of-the-art approaches, which all exploit RGB-D video streams, by a notable margin. This makes our solution widely deployable with ordinary cameras.
In order to solve the aforementioned issues, skeleton information from RGB-D video has been widely studied to improve recognition accuracy @cite_10 @cite_40 @cite_27 @cite_13 @cite_17 . Observations from the seminal work by Johansson @cite_19 suggest that the movement of a few human joints is sufficient to recognize an action. Recently, @cite_28 propose a CNN-based approach leveraging skeleton data. In @cite_23 , the authors propose a hierarchical bidirectional Recurrent Neural Network (RNN) to classify human actions. Methods proposed in @cite_4 and @cite_10 utilize skeleton data on three CNN streams that are pretrained on the large ImageNet dataset @cite_12 . @cite_32 use view-invariant features from skeleton data to improve over @cite_4 and @cite_10 , and they use similar four-stream pretrained models. All these methods utilize skeleton data, either extracted from depth data or captured by Kinect. Inspired by these works, we adopt a bidirectional LSTM in our method; instead of extracting skeleton data from depth information as in other methods, we extract skeleton keypoints from RGB frames, which are available from ordinary digital cameras.
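Below is a minimal sketch of a bidirectional LSTM classifier over per-frame skeleton keypoints, in the spirit of the approach described above. The input layout, joint count, and class count are assumptions for illustration; the paper's data augmentation, dynamic frame dropout, and gradient injection are omitted.

```python
import torch
from torch import nn

class SkeletonBLSTM(nn.Module):
    """Bidirectional LSTM over per-frame 2D keypoints -> activity logits."""

    def __init__(self, n_joints=18, n_classes=27, hidden=128, n_layers=2):
        super().__init__()
        # Each frame is flattened to (x, y) per joint; 18 joints matches the
        # OpenPose body model and is used here only as an assumption.
        self.lstm = nn.LSTM(
            input_size=n_joints * 2,
            hidden_size=hidden,
            num_layers=n_layers,
            batch_first=True,
            bidirectional=True,
        )
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, keypoints):            # (batch, frames, n_joints * 2)
        features, _ = self.lstm(keypoints)   # (batch, frames, 2 * hidden)
        pooled = features.mean(dim=1)        # temporal average pooling
        return self.classifier(pooled)

# Example: a batch of 4 clips, 64 frames each, 18 joints with (x, y) coords.
model = SkeletonBLSTM()
logits = model(torch.randn(4, 64, 36))
print(logits.shape)  # torch.Size([4, 27])
```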
{ "cite_N": [ "@cite_4", "@cite_28", "@cite_10", "@cite_32", "@cite_19", "@cite_27", "@cite_40", "@cite_23", "@cite_13", "@cite_12", "@cite_17" ], "mid": [ "2554408731", "2593146028", "2526041356", "2761860076", "2099634219", "", "", "1950788856", "", "2108598243", "" ], "abstract": [ "This letter presents an effective method to encode the spatiotemporal information of a skeleton sequence into color texture images, referred to as skeleton optical spectra, and employs convolutional neural networks (ConvNets) to learn the discriminative features for action recognition. Such spectrum representation makes it possible to use a standard ConvNet architecture to learn suitable “dynamic” features from skeleton sequences without training millions of parameters afresh and it is especially valuable when there is insufficient annotated training video data. Specifically, the encoding consists of four steps: mapping of joint distribution, spectrum coding of joint trajectories, spectrum coding of body parts, and joint velocity weighted saturation and brightness. Experimental results on three widely used datasets have demonstrated the efficacy of the proposed method.", "Sequence-based view invariant transform can effectively cope with view variations.Enhanced skeleton visualization method encodes spatio-temporal skeletons as visual and motion enhanced color images in a compact yet distinctive manner.Multi-stream convolutional neural networks fusion model is able to explore complementary properties among different types of enhanced color images.Our method consistently achieves the highest accuracies on four datasets, including the largest and most challenging NTU RGB+D dataset for skeleton-based action recognition. Human action recognition based on skeletons has wide applications in humancomputer interaction and intelligent surveillance. However, view variations and noisy data bring challenges to this task. Whats more, it remains a problem to effectively represent spatio-temporal skeleton sequences. To solve these problems in one goal, this work presents an enhanced skeleton visualization method for view invariant human action recognition. Our method consists of three stages. First, a sequence-based view invariant transform is developed to eliminate the effect of view variations on spatio-temporal locations of skeleton joints. Second, the transformed skeletons are visualized as a series of color images, which implicitly encode the spatio-temporal information of skeleton joints. Furthermore, visual and motion enhancement methods are applied on color images to enhance their local patterns. Third, a convolutional neural networks-based model is adopted to extract robust and discriminative features from color images. The final action class scores are generated by decision level fusion of deep features. Extensive experiments on four challenging datasets consistently demonstrate the superiority of our method.", "Recently, Convolutional Neural Networks (ConvNets) have shown promising performances in many computer vision tasks, especially image-based recognition. How to effectively use ConvNets for video-based recognition is still an open problem. In this paper, we propose a compact, effective yet simple method to encode spatio-temporal information carried in 3D skeleton sequences into multiple 2D images, referred to as Joint Trajectory Maps (JTM), and ConvNets are adopted to exploit the discriminative features for real-time human action recognition. 
The proposed method has been evaluated on three public benchmarks, i.e., MSRC-12 Kinect gesture dataset (MSRC-12), G3D dataset and UTD multimodal human action dataset (UTD-MHAD) and achieved the state-of-the-art results.", "Motivated by the promising performance achieved by deep learning, an effective yet simple method is proposed to encode the spatio-temporal information of skeleton sequences into color texture images, referred to as joint distance maps (JDMs), and convolutional neural networks are employed to exploit the discriminative features from the JDMs for human action and interaction recognition. The pair-wise distances between joints over a sequence of single or multiple person skeletons are encoded into color variations to capture temporal information. The efficacy of the proposed method has been verified by the state-of-the-art results on the large RGB+D Dataset and small UTD-MHAD Dataset in both single-view and cross-view settings.", "This paper reports the first phase of a research program on visual perception of motion patterns characteristic of living organisms in locomotion. Such motion patterns in animals and men are termed here as biological motion. They are characterized by a far higher degree of complexity than the patterns of simple mechanical motions usually studied in our laboratories. In everyday perceptions, the visual information from biological motion and from the corresponding figurative contour patterns (the shape of the body) are intermingled. A method for studying information from the motion pattern per se without interference with the form aspect was devised. In short, the motion of the living body was represented by a few bright spots describing the motions of the main joints. It is found that 10–12 such elements in adequate motion combinations in proximal stimulus evoke a compelling impression of human walking, running, dancing, etc. The kinetic-geometric model for visual vector analysis originally developed in the study of perception of motion combinations of the mechanical type was applied to these biological motion patterns. The validity of this model in the present context was experimentally tested and the results turned out to be highly positive.", "", "", "Human actions can be represented by the trajectories of skeleton joints. Traditional methods generally model the spatial structure and temporal dynamics of human skeleton with hand-crafted features and recognize human actions by well-designed classifiers. In this paper, considering that recurrent neural network (RNN) can model the long-term contextual information of temporal sequences well, we propose an end-to-end hierarchical RNN for skeleton based action recognition. Instead of taking the whole skeleton as the input, we divide the human skeleton into five parts according to human physical structure, and then separately feed them to five subnets. As the number of layers increases, the representations extracted by the subnets are hierarchically fused to be the inputs of higher layers. The final representations of the skeleton sequences are fed into a single-layer perceptron, and the temporally accumulated output of the perceptron is the final decision. We compare with five other deep RNN architectures derived from our model to verify the effectiveness of the proposed network, and also compare with several other methods on three publicly available datasets. 
Experimental results demonstrate that our model achieves the state-of-the-art performance with high computational efficiency.", "", "The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.", "" ] }
1812.06576
2949720626
Person re-identification (ReID) aims to match people across multiple non-overlapping video cameras deployed at different locations. To address this challenging problem, many metric learning approaches have been proposed, among which triplet loss is one of the state-of-the-art choices. In this work, we explore the margin between positive and negative pairs of triplets and prove that a large margin is beneficial. In particular, we propose a novel multi-stage training strategy which learns an incremental triplet margin and improves triplet loss effectively. Multiple levels of feature maps are exploited to make the learned features more discriminative. Besides, we introduce a global hard identity searching method to sample hard identities when generating a training batch. Extensive experiments on Market-1501, CUHK03, and DukeMTMC-reID show that our approach yields a performance boost and outperforms most existing state-of-the-art methods.
Strictly speaking, triplet loss was first introduced by @cite_23 . They trained the metric with the goal that the k-nearest neighbors belong to the same class while examples of different classes are separated by a large margin. Based on this work, @cite_5 improved the loss to learn a unified embedding for face recognition. They pushed forward the concept of the triplet, minimizing the distance between an anchor and a positive while maximizing the distance between the anchor and a negative. @cite_13 improved the triplet loss function by restricting positive pairs to lie within a small distance; this improved loss was used to train a multi-channel parts-based convolutional neural network model. Recently, @cite_30 summarized the ReID works using triplet loss and proposed training strategies to improve its performance. While our work is also based on triplet loss, we investigate the influence of the margin, which has received little attention so far.
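For reference, the margin-based triplet loss discussed above is essentially a hinge on the gap between anchor-positive and anchor-negative distances. The sketch shows it in PyTorch together with a stage-wise margin schedule in the spirit of the incremental-margin idea; the schedule values and embedding dimensions are illustrative placeholders, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.3):
    """Hinge on the gap between anchor-positive and anchor-negative distances."""
    d_ap = F.pairwise_distance(anchor, positive)
    d_an = F.pairwise_distance(anchor, negative)
    return F.relu(d_ap - d_an + margin).mean()

# Illustrative multi-stage schedule: training proceeds with a gradually
# increasing margin (placeholder values, not the paper's configuration).
margin_schedule = [0.3, 0.5, 0.7]

emb = lambda: torch.randn(32, 128)  # fake 128-d embeddings for 32 triplets
for stage, m in enumerate(margin_schedule):
    loss = triplet_loss(emb(), emb(), emb(), margin=m)
    print(f"stage {stage}: margin={m}, loss={loss.item():.3f}")
```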
{ "cite_N": [ "@cite_30", "@cite_5", "@cite_13", "@cite_23" ], "mid": [ "2598634450", "2096733369", "", "2106053110" ], "abstract": [ "In the past few years, the field of computer vision has gone through a revolution fueled mainly by the advent of large datasets and the adoption of deep convolutional neural networks for end-to-end learning. The person re-identification subfield is no exception to this. Unfortunately, a prevailing belief in the community seems to be that the triplet loss is inferior to using surrogate losses (classification, verification) followed by a separate metric learning step. We show that, for models trained from scratch as well as pretrained ones, using a variant of the triplet loss to perform end-to-end deep metric learning outperforms most other published methods by a large margin.", "Despite significant recent advances in the field of face recognition [10, 14, 15, 17], implementing face verification and recognition efficiently at scale presents serious challenges to current approaches. In this paper we present a system, called FaceNet, that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity. Once this space has been produced, tasks such as face recognition, verification and clustering can be easily implemented using standard techniques with FaceNet embeddings as feature vectors.", "", "The accuracy of k-nearest neighbor (kNN) classification depends significantly on the metric used to compute distances between different examples. In this paper, we show how to learn a Mahalanobis distance metric for kNN classification from labeled examples. The Mahalanobis metric can equivalently be viewed as a global linear transformation of the input space that precedes kNN classification using Euclidean distances. In our approach, the metric is trained with the goal that the k-nearest neighbors always belong to the same class while examples from different classes are separated by a large margin. As in support vector machines (SVMs), the margin criterion leads to a convex optimization based on the hinge loss. Unlike learning in SVMs, however, our approach requires no modification or extension for problems in multiway (as opposed to binary) classification. In our framework, the Mahalanobis distance metric is obtained as the solution to a semidefinite program. On several data sets of varying size and difficulty, we find that metrics trained in this way lead to significant improvements in kNN classification. Sometimes these results can be further improved by clustering the training examples and learning an individual metric within each cluster. We show how to learn and combine these local metrics in a globally integrated manner." ] }
1812.06576
2949720626
Person re-identification (ReID) aims to match people across multiple non-overlapping video cameras deployed at different locations. To address this challenging problem, many metric learning approaches have been proposed, among which triplet loss is one of the state-of-the-art choices. In this work, we explore the margin between positive and negative pairs of triplets and prove that a large margin is beneficial. In particular, we propose a novel multi-stage training strategy which learns an incremental triplet margin and improves triplet loss effectively. Multiple levels of feature maps are exploited to make the learned features more discriminative. Besides, we introduce a global hard identity searching method to sample hard identities when generating a training batch. Extensive experiments on Market-1501, CUHK03, and DukeMTMC-reID show that our approach yields a performance boost and outperforms most existing state-of-the-art methods.
Hard example mining has been widely exploited to assist the training of deep neural networks. @cite_33 proposed online hard example mining to improve the performance of object detection. @cite_30 extended this idea and selected the hardest positive and negative samples within a batch when generating triplets. These methods mine hard examples from a training batch rather than from the whole training set, whereas the proposed GHIS searches for hard identities among all identities in the training set.
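The within-batch hardest-positive/hardest-negative selection mentioned above can be written compactly. This is a generic batch-hard mining sketch, not the paper's GHIS, which instead samples hard identities over the whole training set.

```python
import torch

def batch_hard_triplets(embeddings, labels):
    """For each anchor, pick the hardest positive and negative within the batch.

    embeddings: (N, D) float tensor, labels: (N,) long tensor.
    Returns per-anchor distances to the hardest positive and hardest negative.
    """
    dist = torch.cdist(embeddings, embeddings)          # (N, N) pairwise distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)   # (N, N) same-identity mask
    eye = torch.eye(len(labels), dtype=torch.bool)

    pos_dist = dist.masked_fill(~same | eye, float("-inf")).amax(dim=1)
    neg_dist = dist.masked_fill(same, float("inf")).amin(dim=1)
    return pos_dist, neg_dist

# Example usage with a margin-based loss (4 identities, 4 samples each):
emb = torch.randn(16, 128)
labels = torch.arange(4).repeat_interleave(4)
d_ap, d_an = batch_hard_triplets(emb, labels)
loss = torch.relu(d_ap - d_an + 0.3).mean()
```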
{ "cite_N": [ "@cite_30", "@cite_33" ], "mid": [ "2598634450", "2341497066" ], "abstract": [ "In the past few years, the field of computer vision has gone through a revolution fueled mainly by the advent of large datasets and the adoption of deep convolutional neural networks for end-to-end learning. The person re-identification subfield is no exception to this. Unfortunately, a prevailing belief in the community seems to be that the triplet loss is inferior to using surrogate losses (classification, verification) followed by a separate metric learning step. We show that, for models trained from scratch as well as pretrained ones, using a variant of the triplet loss to perform end-to-end deep metric learning outperforms most other published methods by a large margin.", "The field of object detection has made significant advances riding on the wave of region-based ConvNets, but their training procedure still includes many heuristics and hyperparameters that are costly to tune. We present a simple yet surprisingly effective online hard example mining (OHEM) algorithm for training region-based ConvNet detectors. Our motivation is the same as it has always been – detection datasets contain an overwhelming number of easy examples and a small number of hard examples. Automatic selection of these hard examples can make training more effective and efficient. OHEM is a simple and intuitive algorithm that eliminates several heuristics and hyperparameters in common use. But more importantly, it yields consistent and significant boosts in detection performance on benchmarks like PASCAL VOC 2007 and 2012. Its effectiveness increases as datasets become larger and more difficult, as demonstrated by the results on the MS COCO dataset. Moreover, combined with complementary advances in the field, OHEM leads to state-of-the-art results of 78.9 and 76.3 mAP on PASCAL VOC 2007 and 2012 respectively." ] }
1812.06635
2950469992
In this paper, we propose a way to combine two acceleration techniques for the @math -regularized least squares problem: safe screening tests, which allow useless dictionary atoms to be eliminated, and the use of fast structured approximations of the dictionary matrix. To do so, we introduce a new family of screening tests, termed stable screening, which can cope with approximation errors on the dictionary atoms while keeping the safety of the test (i.e. zero risk of rejecting atoms belonging to the solution support). Some of the main existing screening tests are extended to this new framework. The proposed algorithm consists in using a coarser (but faster) approximation of the dictionary at the initial iterations and then switching to better approximations until eventually adopting the original dictionary. A systematic switching criterion based on the duality gap saturation and the screening ratio is derived. Simulation results show significant reductions in both computational complexity and execution times for a wide range of tested scenarios.
Apart from the aforementioned structured dictionaries and safe screening tests, as well as other preceding correlation-based feature selection heuristics @cite_34 @cite_37 , some related acceleration strategies for sparsity-inducing optimization problems (or, more specifically, for problem ) are worth citing.
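As background for the screening discussion, the sketch below implements a basic SAFE-style discard test for the lasso: an atom is removed when its correlation with the observation is provably too small for it to be active at the solution. This is the textbook static rule, stated under a standard lasso formulation and assumed here for illustration; it is not the stable screening tests proposed in the paper.

```python
import numpy as np

def safe_screen(X, y, lam):
    """Basic SAFE-style screening for the lasso 0.5*||y - Xw||^2 + lam*||w||_1.

    Returns a boolean mask of atoms that can be safely discarded, i.e. whose
    coefficient is guaranteed to be zero at the solution. The paper's stable
    tests additionally tolerate approximation errors on the atoms.
    """
    corr = np.abs(X.T @ y)                      # |x_j^T y| for every atom
    lam_max = corr.max()                        # smallest lam giving the zero solution
    col_norms = np.linalg.norm(X, axis=0)
    radius = np.linalg.norm(y) * (lam_max - lam) / lam_max
    return corr < lam - col_norms * radius      # True -> atom j can be removed

# Example: random dictionary, screened at a moderately large regularization.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 500))
X /= np.linalg.norm(X, axis=0)                  # unit-norm atoms
y = rng.standard_normal(100)
lam = 0.8 * np.abs(X.T @ y).max()
print("atoms discarded:", int(safe_screen(X, y, lam).sum()), "of", X.shape[1])
```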
{ "cite_N": [ "@cite_37", "@cite_34" ], "mid": [ "2131060185", "2154560360" ], "abstract": [ "Summary. We consider rules for discarding predictors in lasso regression and related problems, for computational efficiency. El Ghaoui and his colleagues have proposed ‘SAFE’ rules, based on univariate inner products between each predictor and the outcome, which guarantee that a coefficient will be 0 in the solution vector. This provides a reduction in the number of variables that need to be entered into the optimization. We propose strong rules that are very simple and yet screen out far more predictors than the SAFE rules. This great practical improvement comes at a price: the strong rules are not foolproof and can mistakenly discard active predictors, i.e. predictors that have non-zero coefficients in the solution. We therefore combine them with simple checks of the Karush–Kuhn–Tucker conditions to ensure that the exact solution to the convex problem is delivered. Of course, any (approximate) screening method can be combined with the Karush–Kuhn–Tucker conditions to ensure the exact solution; the strength of the strong rules lies in the fact that, in practice, they discard a very large number of the inactive predictors and almost never commit mistakes. We also derive conditions under which they are foolproof. Strong rules provide substantial savings in computational time for a variety of statistical optimization problems.", "Summary. Variable selection plays an important role in high dimensional statistical modelling which nowadays appears in many areas and is key to various scientific discoveries. For problems of large scale or dimensionality p, accuracy of estimation and computational cost are two top concerns. Recently, Candes and Tao have proposed the Dantzig selector using L1‐regularization and showed that it achieves the ideal risk up to a logarithmic factor log (p). Their innovative procedure and remarkable result are challenged when the dimensionality is ultrahigh as the factor log (p) can be large and their uniform uncertainty principle can fail. Motivated by these concerns, we introduce the concept of sure screening and propose a sure screening method that is based on correlation learning, called sure independence screening, to reduce dimensionality from high to a moderate scale that is below the sample size. In a fairly general asymptotic framework, correlation learning is shown to have the sure screening property for even exponentially growing dimensionality. As a methodological extension, iterative sure independence screening is also proposed to enhance its finite sample performance. With dimension reduced accurately from high to below sample size, variable selection can be improved on both speed and accuracy, and can then be accomplished by a well‐developed method such as smoothly clipped absolute deviation, the Dantzig selector, lasso or adaptive lasso. The connections between these penalized least squares methods are also elucidated." ] }
1812.06635
2950469992
In this paper, we propose a way to combine two acceleration techniques for the @math -regularized least squares problem: safe screening tests, which allow useless dictionary atoms to be eliminated, and the use of fast structured approximations of the dictionary matrix. To do so, we introduce a new family of screening tests, termed stable screening, which can cope with approximation errors on the dictionary atoms while keeping the safety of the test (i.e. zero risk of rejecting atoms belonging to the solution support). Some of the main existing screening tests are extended to this new framework. The proposed algorithm consists in using a coarser (but faster) approximation of the dictionary at the initial iterations and then switching to better approximations until eventually adopting the original dictionary. A systematic switching criterion based on the duality gap saturation and the screening ratio is derived. Simulation results show significant reductions in both computational complexity and execution times for a wide range of tested scenarios.
Instead of starting from the full problem and pruning the feature set, working set techniques @cite_27 @cite_17 start with small restricted problems and progressively include more promising features. In @cite_33 , the authors combine a working set strategy with safe screening and in @cite_21 they incorporate a dual extrapolation technique to further enhance the screening performance and accelerate convergence. This idea is conceivably complementary to the techniques proposed here.
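A schematic of the working-set strategy described above: solve a restricted lasso on the current working set, then grow the set with the atoms that most violate the optimality (KKT) conditions. The inner solver here is scikit-learn's Lasso, used only as a stand-in under its objective (1/(2n))||y - Xw||^2 + alpha*||w||_1; the cited methods use their own subproblem solvers and screening rules.

```python
import numpy as np
from sklearn.linear_model import Lasso

def working_set_lasso(X, y, alpha, grow=10, max_outer=20, tol=1e-6):
    """Toy working-set solver: restricted solves plus KKT-violation growth."""
    n, p = X.shape
    w = np.zeros(p)
    ws = list(np.argsort(-np.abs(X.T @ y))[:grow])       # initial working set
    for _ in range(max_outer):
        sub = Lasso(alpha=alpha, fit_intercept=False, max_iter=10_000)
        sub.fit(X[:, ws], y)                              # solve restricted problem
        w[:] = 0.0
        w[ws] = sub.coef_
        # KKT check on atoms outside the working set
        grad = np.abs(X.T @ (y - X @ w)) / n
        outside = np.setdiff1d(np.arange(p), ws)
        if outside.size == 0:
            break
        violations = grad[outside] - alpha
        if violations.max() <= tol:
            break                                         # optimality reached
        worst = outside[np.argsort(-violations)[:grow]]   # strongest violators
        ws.extend(worst.tolist())
    return w

# Example usage on synthetic data with 5 active atoms:
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 1000))
y = X[:, :5] @ rng.standard_normal(5) + 0.01 * rng.standard_normal(200)
w = working_set_lasso(X, y, alpha=0.1)
print("nonzeros found:", int((w != 0).sum()))
```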
{ "cite_N": [ "@cite_27", "@cite_21", "@cite_33", "@cite_17" ], "mid": [ "2497464837", "2963653702", "2604483902", "1578527825" ], "abstract": [ "In this paper, we investigate new active-settype methods for l1-regularized linear regression that overcome some difficulties of existing active set methods. By showing a relationship between l1-regularized linear regression and the linear complementarity problem with bounds, we present a fast active-set-type method, called block principal pivoting. This method accelerates computation by allowing exchanges of several variables among working sets. We further provide an improvement of this method, discuss its properties, and also explain a connection to the structure learning of Gaussian graphical models. Experimental comparisons on synthetic and real data sets show that the proposed method is significantly faster than existing active set methods and competitive against recently developed iterative methods.", "Convex sparsity-inducing regularizations are ubiquitous in high-dimensional machine learning, but solving the resulting optimization problems can be slow. To accelerate solvers, state-of-the-art approaches consist in reducing the size of the optimization problem at hand. In the context of regression, this can be achieved either by discarding irrelevant features (screening techniques) or by prioritizing features likely to be included in the support of the solution (working set techniques). Duality comes into play at several steps in these techniques. Here, we propose an extrapolation technique starting from a sequence of iterates in the dual that leads to the construction of improved dual points. This enables a tighter control of op-timality as used in stopping criterion, as well as better screening performance of Gap Safe rules. Finally, we propose a working set strategy based on an aggressive use of Gap Safe screening rules. Thanks to our new dual point construction, we show significant computational speedups on multiple real-world problems.", "Convex sparsity-promoting regularizations are ubiquitous in modern statistical learning. By construction, they yield solutions with few non-zero coefficients, which correspond to saturated constraints in the dual optimization formulation. Working set (WS) strategies are generic optimization techniques that consist in solving simpler problems that only consider a subset of constraints, whose indices form the WS. Working set methods therefore involve two nested iterations: the outer loop corresponds to the definition of the WS and the inner loop calls a solver for the subproblems. For the Lasso estimator a WS is a set of features, while for a Group Lasso it refers to a set of groups. In practice, WS are generally small in this context so the associated feature Gram matrix can fit in memory. Here we show that the Gauss-Southwell rule (a greedy strategy for block coordinate descent techniques) leads to fast solvers in this case. Combined with a working set strategy based on an aggressive use of so-called Gap Safe screening rules, we propose a solver achieving state-of-the-art performance on sparse learning problems. Results are presented on Lasso and multi-task Lasso estimators.", "By reducing optimization to a sequence of small subproblems, working set methods achieve fast convergence times for many challenging problems. 
Despite excellent performance, theoretical understanding of working sets is limited, and implementations often resort to heuristics to determine subproblem size, makeup, and stopping criteria. We propose BLITZ, a fast working set algorithm accompanied by useful guarantees. Making no assumptions on data, our theory relates subproblem size to progress toward convergence. This result motivates methods for optimizing algorithmic parameters and discarding irrelevant variables as iterations progress. Applied to l1-regularized learning, BLITZ convincingly outperforms existing solvers in sequential, limited-memory, and distributed settings. BLITZ is not specific to l1-regularized learning, making the algorithm relevant to many applications involving sparsity or constraints." ] }
1812.06635
2950469992
In this paper, we propose a way to combine two acceleration techniques for the @math -regularized least squares problem: safe screening tests, which eliminate useless dictionary atoms, and fast structured approximations of the dictionary matrix. To do so, we introduce a new family of screening tests, termed stable screening, which can cope with approximation errors on the dictionary atoms while preserving the safety of the test (i.e., zero risk of rejecting atoms that belong to the solution support). Several of the main existing screening tests are extended to this new framework. The proposed algorithm uses a coarser (but faster) approximation of the dictionary in the initial iterations and then switches to finer approximations until eventually adopting the original dictionary. A systematic switching criterion based on the duality gap saturation and the screening ratio is derived. Simulation results show significant reductions in both computational complexity and execution time for a wide range of tested scenarios.
Joint screening @cite_32 makes it possible to screen many atoms that lie close together with a single test, reducing the number of tests required for a given dictionary. Interestingly, the resulting tests share many similarities with, and mathematical connections to, the stable screening tests introduced here, despite arising from an essentially different premise.
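To make the screening machinery discussed above concrete, the following Python sketch implements a standard Gap Safe sphere test for the @math -regularized least squares (Lasso) problem, in the spirit of the Gap Safe rules cited in this record. It is a minimal illustration, not the stable or joint screening variants themselves; the dictionary, signal, regularization level, and iterate are made-up example values.

```python
import numpy as np

def gap_safe_screen(X, y, w, lam):
    """Return a boolean mask of atoms that can be safely discarded.

    Gap Safe sphere test for the Lasso
        min_w 0.5 * ||y - X w||^2 + lam * ||w||_1
    evaluated at an arbitrary primal iterate w (illustrative sketch only).
    """
    residual = y - X @ w
    # Dual feasible point obtained by rescaling the residual.
    theta = residual / max(lam, np.max(np.abs(X.T @ residual)))
    # Duality gap between primal and dual objectives.
    primal = 0.5 * residual @ residual + lam * np.sum(np.abs(w))
    dual = 0.5 * (y @ y) - 0.5 * lam ** 2 * np.sum((theta - y / lam) ** 2)
    gap = max(primal - dual, 0.0)
    radius = np.sqrt(2.0 * gap) / lam
    # Atom j is provably inactive if |x_j^T theta| + radius * ||x_j|| < 1.
    scores = np.abs(X.T @ theta) + radius * np.linalg.norm(X, axis=0)
    return scores < 1.0

# Toy example with a random dictionary (made-up sizes).
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 200))
y = rng.standard_normal(50)
lam = 0.5 * np.max(np.abs(X.T @ y))   # a fraction of lambda_max
w = np.zeros(200)                     # e.g., a cold-start iterate
print("atoms screened out:", gap_safe_screen(X, y, w, lam).sum())
```

Tests of this form are cheap to evaluate and can be repeated as the iterate improves, which is what makes them attractive to combine with approximate dictionaries.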
{ "cite_N": [ "@cite_32" ], "mid": [ "2963552869" ], "abstract": [ "This paper focusses on “safe” screening techniques for the LASSO problem. Motivated by the need for low-complexity algorithms, we propose a new approach, dubbed “joint screening test”, allowing to screen a set of atoms by carrying out one single test. The approach is particularized to two different sets of atoms, respectively expressed as sphere and dome regions. After presenting the mathematical derivations of the tests, we elaborate on their relative effectiveness and discuss the practical use of such procedures." ] }
1812.06677
2904325312
We present a novel approach to reconstructing large or featureless scenes. Our method jointly estimates camera poses and a room layout from a set of partial reconstructions that arise from camera tracking interruptions when scanning a large or featureless scene. Unlike existing methods that rely on feature point matching to localize the camera, we exploit the 3D "box" structure of a typical room layout that satisfies the Manhattan World property. We first estimate a local layout for each partial scan separately and then combine these local layouts to form a globally aligned layout with loop closure. We validate our method quantitatively and qualitatively on real and synthetic scenes of various sizes and complexities. The evaluations and comparisons show the superior effectiveness and accuracy of our method.
Indoor scene understanding has been a popular topic and has accumulated a rich literature over the past decades. We review the most relevant works here and refer readers to the survey @cite_25 for an overview.
{ "cite_N": [ "@cite_25" ], "mid": [ "2792893928" ], "abstract": [ "With the availability of low-cost and compact 2.5 3D visual sensing devices, computer vision community is experiencing a growing interest in visual scene understanding. This survey paper provides a comprehensive background to this research topic. We begin with a historical perspective, followed by popular 3D data representations and a comparative analysis of available datasets. Before delving into the application specific details, this survey provides a succinct introduction to the core technologies that are the underlying methods extensively used in the literature. Afterwards, we review the developed techniques according to a taxonomy based on the scene understanding tasks. This covers holistic indoor scene understanding as well as subtasks such as scene classification, object detection, pose estimation, semantic segmentation, 3D reconstruction, saliency detection, physics-based reasoning and affordance prediction. Later on, we summarize the performance metrics used for evaluation in different tasks and a quantitative comparison among the recent state-of-the-art techniques. We conclude this review with the current challenges and an outlook towards the open research problems requiring further investigation." ] }
1812.06677
2904325312
We present a novel approach to reconstructing large or featureless scenes. Our method jointly estimates camera poses and a room layout from a set of partial reconstructions that arise from camera tracking interruptions when scanning a large or featureless scene. Unlike existing methods that rely on feature point matching to localize the camera, we exploit the 3D "box" structure of a typical room layout that satisfies the Manhattan World property. We first estimate a local layout for each partial scan separately and then combine these local layouts to form a globally aligned layout with loop closure. We validate our method quantitatively and qualitatively on real and synthetic scenes of various sizes and complexities. The evaluations and comparisons show the superior effectiveness and accuracy of our method.
3D Reconstruction A number of simultaneous localization and mapping (SLAM) techniques are widely used to model 3D scenes with an RGB-D sensor. Typical works include Kinect Fusion @cite_15 , Voxel Hashing @cite_33 , Elastic Fusion @cite_17 , Bundle Fusion @cite_34 , ORB-SLAM @cite_24 , SVO @cite_23 , and DSO @cite_11 . These methods are effective for 3D scanning tasks such as scene reconstruction and accurate camera pose estimation. However, for scenes without enough matched overlap, these algorithms are likely to fail or exhibit unacceptable inaccuracies.
{ "cite_N": [ "@cite_11", "@cite_33", "@cite_24", "@cite_23", "@cite_15", "@cite_34", "@cite_17" ], "mid": [ "2474281075", "2071906076", "2535547924", "", "1987648924", "2336961836", "2527142681" ], "abstract": [ "Direct Sparse Odometry (DSO) is a visual odometry method based on a novel, highly accurate sparse and direct structure and motion formulation. It combines a fully direct probabilistic model (minimizing a photometric error) with consistent, joint optimization of all model parameters, including geometry-represented as inverse depth in a reference frame-and camera motion. This is achieved in real time by omitting the smoothness prior used in other direct methods and instead sampling pixels evenly throughout the images. Since our method does not depend on keypoint detectors or descriptors, it can naturally sample pixels from across all image regions that have intensity gradient, including edges or smooth intensity variations on essentially featureless walls. The proposed model integrates a full photometric calibration, accounting for exposure time, lens vignetting, and non-linear response functions. We thoroughly evaluate our method on three different datasets comprising several hours of video. The experiments show that the presented approach significantly outperforms state-of-the-art direct and indirect methods in a variety of real-world settings, both in terms of tracking accuracy and robustness.", "Online 3D reconstruction is gaining newfound interest due to the availability of real-time consumer depth cameras. The basic problem takes live overlapping depth maps as input and incrementally fuses these into a single 3D model. This is challenging particularly when real-time performance is desired without trading quality or scale. We contribute an online system for large and fine scale volumetric reconstruction based on a memory and speed efficient data structure. Our system uses a simple spatial hashing scheme that compresses space, and allows for real-time access and updates of implicit surface data, without the need for a regular or hierarchical grid data structure. Surface data is only stored densely where measurements are observed. Additionally, data can be streamed efficiently in or out of the hash table, allowing for further scalability during sensor motion. We show interactive reconstructions of a variety of scenes, reconstructing both fine-grained details and large scale environments. We illustrate how all parts of our pipeline from depth map pre-processing, camera pose estimation, depth map fusion, and surface rendering are performed at real-time rates on commodity graphics hardware. We conclude with a comparison to current state-of-the-art online systems, illustrating improved performance and reconstruction quality.", "We present ORB-SLAM2, a complete simultaneous localization and mapping (SLAM) system for monocular, stereo and RGB-D cameras, including map reuse, loop closing, and relocalization capabilities. The system works in real time on standard central processing units in a wide variety of environments from small hand-held indoors sequences, to drones flying in industrial environments and cars driving around a city. Our back-end, based on bundle adjustment with monocular and stereo observations, allows for accurate trajectory estimation with metric scale. Our system includes a lightweight localization mode that leverages visual odometry tracks for unmapped regions and matches with map points that allow for zero-drift localization. 
The evaluation on 29 popular public sequences shows that our method achieves state-of-the-art accuracy, being in most cases the most accurate SLAM solution. We publish the source code, not only for the benefit of the SLAM community, but with the aim of being an out-of-the-box SLAM solution for researchers in other fields.", "", "We present a system for accurate real-time mapping of complex and arbitrary indoor scenes in variable lighting conditions, using only a moving low-cost depth camera and commodity graphics hardware. We fuse all of the depth data streamed from a Kinect sensor into a single global implicit surface model of the observed scene in real-time. The current sensor pose is simultaneously obtained by tracking the live depth frame relative to the global model using a coarse-to-fine iterative closest point (ICP) algorithm, which uses all of the observed depth data available. We demonstrate the advantages of tracking against the growing full surface model compared with frame-to-frame tracking, obtaining tracking and mapping results in constant time within room sized scenes with limited drift and high accuracy. We also show both qualitative and quantitative results relating to various aspects of our tracking and mapping system. Modelling of natural scenes, in real-time with only commodity sensor and GPU hardware, promises an exciting step forward in augmented reality (AR), in particular, it allows dense surfaces to be reconstructed in real-time, with a level of detail and robustness beyond any solution yet presented using passive computer vision.", "Real-time, high-quality, 3D scanning of large-scale scenes is key to mixed reality and robotic applications. However, scalability brings challenges of drift in pose estimation, introducing significant errors in the accumulated model. Approaches often require hours of offline processing to globally correct model errors. Recent online methods demonstrate compelling results but suffer from (1) needing minutes to perform online correction, preventing true real-time use; (2) brittle frame-to-frame (or frame-to-model) pose estimation, resulting in many tracking failures; or (3) supporting only unstructured point-based representations, which limit scan quality and applicability. We systematically address these issues with a novel, real-time, end-to-end reconstruction framework. At its core is a robust pose estimation strategy, optimizing per frame for a global set of camera poses by considering the complete history of RGB-D input with an efficient hierarchical approach. We remove the heavy reliance on temporal tracking and continually localize to the globally optimized frames instead. We contribute a parallelizable optimization framework, which employs correspondences based on sparse features and dense geometric and photometric matching. Our approach estimates globally optimized (i.e., bundle adjusted) poses in real time, supports robust tracking with recovery from gross tracking failures (i.e., relocalization), and re-estimates the 3D model in real time to ensure global consistency, all within a single framework. Our approach outperforms state-of-the-art online systems with quality on par to offline methods, but with unprecedented speed and scan completeness. Our framework leads to a comprehensive online scanning solution for large indoor environments, enabling ease of use and high-quality results.1", "We present a novel approach to real-time dense visual simultaneous localisation and mapping. 
Our system is capable of capturing comprehensive dense globally consistent surfel-based maps of room scale environments and beyond explored using an RGB-D camera in an incremental online fashion, without pose graph optimization or any post-processing steps. This is accomplished by using dense frame-to-model camera tracking and windowed surfel-based fusion coupled with frequent model refinement through non-rigid surface deformations. Our approach applies local model-to-model surface loop closure optimizations as often as possible to stay close to the mode of the map distribution, while utilizing global loop closure to recover from arbitrary drift and maintain global consistency. In the spirit of improving map quality as well as tracking accuracy and robustness, we furthermore explore a novel approach to real-time discrete light source detection. This technique is capable of detecting numerous light sources in indoo..." ] }
1812.06677
2904325312
We present a novel approach to reconstructing large or featureless scenes. Our method jointly estimates camera poses and a room layout from a set of partial reconstructions that arise from camera tracking interruptions when scanning a large or featureless scene. Unlike existing methods that rely on feature point matching to localize the camera, we exploit the 3D "box" structure of a typical room layout that satisfies the Manhattan World property. We first estimate a local layout for each partial scan separately and then combine these local layouts to form a globally aligned layout with loop closure. We validate our method quantitatively and qualitatively on real and synthetic scenes of various sizes and complexities. The evaluations and comparisons show the superior effectiveness and accuracy of our method.
Works on room layout estimation from a single image @cite_8 @cite_10 @cite_0 @cite_41 @cite_5 have been continuously developed and enhance indoor scene analysis and understanding. Due to the narrow field of view of a single standard image, researchers have tried to exploit panoramic images @cite_26 @cite_31 @cite_44 to recover the whole room context. Recently, with the success of deep learning in various vision tasks, the latest techniques @cite_37 @cite_32 rely on convolutional neural networks to map an RGB image directly to a room layout. These methods, using standard or panoramic RGB images, depend heavily on feature points either for key structure detection or for pose estimation. Because of the instability of image feature points (e.g., under occlusion and blur), these methods suffer from inaccuracies and cannot handle complex (they usually recover "cuboid" or "L" shapes @cite_37 ) or featureless scenes. Instead, our method uses depth data and is independent of feature points, avoiding these drawbacks.
{ "cite_N": [ "@cite_37", "@cite_26", "@cite_8", "@cite_41", "@cite_32", "@cite_0", "@cite_44", "@cite_5", "@cite_31", "@cite_10" ], "mid": [ "2598108580", "566730006", "2116851763", "", "", "2534523274", "", "2113107168", "", "" ], "abstract": [ "This paper focuses on the task of room layout estimation from a monocular RGB image. Prior works break the problem into two sub-tasks: semantic segmentation of floor, walls, ceiling to produce layout hypotheses, followed by an iterative optimization step to rank these hypotheses. In contrast, we adopt a more direct formulation of this problem as one of estimating an ordered set of room layout keypoints. The room layout and the corresponding segmentation is completely specified given the locations of these ordered keypoints. We predict the locations of the room layout keypoints using RoomNet, an end-to-end trainable encoder-decoder network. On the challenging benchmark datasets Hedau and LSUN, we achieve state-of-the-art performance along with 200× to 600× speedup compared to the most recent work. Additionally, we present optional extensions to the RoomNet architecture such as including recurrent computations and memory units to refine the keypoint locations under the same parametric capacity.", "The field-of-view of standard cameras is very small, which is one of the main reasons that contextual information is not as useful as it should be for object detection. To overcome this limitation, we advocate the use of 360° full-view panoramas in scene understanding, and propose a whole-room context model in 3D. For an input panorama, our method outputs 3D bounding boxes of the room and all major objects inside, together with their semantic categories. Our method generates 3D hypotheses based on contextual constraints and ranks the hypotheses holistically, combining both bottom-up and top-down context information. To train our model, we construct an annotated panorama dataset and reconstruct the 3D model from single-view using manual annotation. Experiments show that solely based on 3D context without any image region category classifier, we can achieve a comparable performance with the state-of-the-art object detector. This demonstrates that when the FOV is large, context is as powerful as object appearance. All data and source code are available online.", "We study the problem of generating plausible interpretations of a scene from a collection of line segments automatically extracted from a single indoor image. We show that we can recognize the three dimensional structure of the interior of a building, even in the presence of occluding objects. Several physically valid structure hypotheses are proposed by geometric reasoning and verified to find the best fitting model to line segments, which is then converted to a full 3D model. Our experiments demonstrate that our structure recovery from line segments is comparable with methods using full image appearance. Our approach shows how a set of rules describing geometric constraints between groups of segments can be used to prune scene interpretation hypotheses and to generate the most plausible interpretation.", "", "", "In this paper, we consider the problem of recovering the spatial layout of indoor scenes from monocular images. The presence of clutter is a major problem for existing single-view 3D reconstruction algorithms, most of which rely on finding the ground-wall boundary. In most rooms, this boundary is partially or entirely occluded. 
We gain robustness to clutter by modeling the global room space with a parameteric 3D “box” and by iteratively localizing clutter and refitting the box. To fit the box, we introduce a structured learning algorithm that chooses the set of parameters to minimize error, based on global perspective cues. On a dataset of 308 images, we demonstrate the ability of our algorithm to recover spatial layout in cluttered rooms and show several examples of estimated free space.", "", "In this paper we propose an approach to jointly infer the room layout as well as the objects present in the scene. Towards this goal, we propose a branch and bound algorithm which is guaranteed to retrieve the global optimum of the joint problem. The main difficulty resides in taking into account occlusion in order to not over-count the evidence. We introduce a new decomposition method, which generalizes integral geometry to triangular shapes, and allows us to bound the different terms in constant time. We exploit both geometric cues and object detectors as image features and show large improvements in 2D and 3D object detection over state-of-the-art deformable part-based models.", "", "" ] }
1812.06677
2904325312
We present a novel approach to reconstructing large or featureless scenes. Our method jointly estimates camera poses and a room layout from a set of partial reconstructions that arise from camera tracking interruptions when scanning a large or featureless scene. Unlike existing methods that rely on feature point matching to localize the camera, we exploit the 3D "box" structure of a typical room layout that satisfies the Manhattan World property. We first estimate a local layout for each partial scan separately and then combine these local layouts to form a globally aligned layout with loop closure. We validate our method quantitatively and qualitatively on real and synthetic scenes of various sizes and complexities. The evaluations and comparisons show the superior effectiveness and accuracy of our method.
RGB-D images include 3D range information for each pixel, which significantly improves the accuracy and robustness of geometry reasoning. Some methods use a single RGB-D image @cite_12 @cite_13 to estimate a room layout, but they are likewise limited by the narrow field of view. With the wider coverage of panoramic RGB-D images, higher-quality layout analysis and structured modeling results have been achieved @cite_38 @cite_18 . A few methods also use densely scanned point clouds as input to estimate room layouts @cite_4 @cite_3 @cite_45 . Most of these methods tackle a complete scene in order to exploit the loop closure property of layouts, whereas our method is able to cope with the more challenging partial scans, which lack a clear outer boundary.
{ "cite_N": [ "@cite_38", "@cite_18", "@cite_4", "@cite_3", "@cite_45", "@cite_13", "@cite_12" ], "mid": [ "2201056710", "", "2775610217", "2609058327", "2795586939", "", "2296672305" ], "abstract": [ "This paper presents a novel 3D modeling framework that reconstructs an indoor scene as a structured model from panorama RGBD images. A scene geometry is represented as a graph, where nodes correspond to structural elements such as rooms, walls, and objects. The approach devises a structure grammar that defines how a scene graph can be manipulated. The grammar then drives a principled new reconstruction algorithm, where the grammar rules are sequentially applied to recover a structured model. The paper also proposes a new room segmentation algorithm and an offset-map reconstruction algorithm that are used in the framework and can enforce architectural shape priors far beyond existing state-of-the-art. The structured scene representation enables a variety of novel applications, ranging from indoor scene visualization, automated floorplan generation, Inverse-CAD, and more. We have tested our framework and algorithms on six synthetic and five real datasets with qualitative and quantitative evaluations. The source code and the data are available at the project website [15].", "", "We present a system to generate building information models (BIMs) of house interiors from 3D scans. The strength of our approach is its simplicity and low runtime which allows for mobile processing applications. We consider scans of single floor, Manhattan-like indoor scenes for which our method creates metric room layouts by detecting walls and performing a subsequent reasoning about their neighborhood relations. The output of our method is a 3D BIM with hierarchical semantic annotations for individual rooms being refined by walls, ceilings, floors and doors. A variety of experiments demonstrate the effectiveness of our approach. Our reconstruction results compare well to other state-of-art methods in both reconstruction quality as well as runtime.", "In this paper, we propose a novel method to jointly solve scene layout estimation and global registration problems for accurate indoor 3D reconstruction. Given a sequence of range data, we first build a set of scene fragments using KinectFusion and register them through pose graph optimization. Afterwards, we alternate between layout estimation and layout-based global registration processes in iterative fashion to complement each other. We extract the scene layout through hierarchical agglomerative clustering and energy-based multi-model fitting in consideration of noisy measurements. Having the estimated scene layout in one hand, we register all the range data through the global iterative closest point algorithm where the positions of 3D points that belong to the layout such as walls and a ceiling are constrained to be close to the layout. We experimentally verify the proposed method with the publicly available synthetic and real-world datasets in both quantitative and qualitative ways.", "The ultimate goal of this indoor mapping research is to automatically reconstruct a floorplan simply by walking through a house with a smartphone in a pocket. This paper tackles this problem by proposing FloorNet, a novel deep neural architecture. The challenge lies in the processing of RGBD streams spanning a large 3D space. 
FloorNet effectively processes the data through three neural network branches: 1) PointNet with 3D points, exploiting the 3D information; 2) CNN with a 2D point density image in a top-down view, enhancing the local spatial reasoning; and 3) CNN with RGB images, utilizing the full image information. FloorNet exchanges intermediate features across the branches to exploit the best of all the architectures. We have created a benchmark for floorplan reconstruction by acquiring RGBD video streams for 155 residential houses or apartments with Google Tango phones and annotating complete floorplan information. Our qualitative and quantitative evaluations demonstrate that the fusion of three branches effectively improves the reconstruction quality. We hope that the paper together with the benchmark will be an important step towards solving a challenging vector-graphics reconstruction problem. Code and data are available at this https URL", "", "This paper presents an approach to parsing the Manhattan structure of an indoor scene from a single RGBD frame. The problem of recovering the floor plan is recast as an optimal labeling problem which can be solved efficiently using Dynamic Programming." ] }
1812.06677
2904325312
We present a novel approach to reconstructing large or featureless scenes. Our method jointly estimates camera poses and a room layout from a set of partial reconstructions that arise from camera tracking interruptions when scanning a large or featureless scene. Unlike existing methods that rely on feature point matching to localize the camera, we exploit the 3D "box" structure of a typical room layout that satisfies the Manhattan World property. We first estimate a local layout for each partial scan separately and then combine these local layouts to form a globally aligned layout with loop closure. We validate our method quantitatively and qualitatively on real and synthetic scenes of various sizes and complexities. The evaluations and comparisons show the superior effectiveness and accuracy of our method.
Indoor Scene Constraints Intrinsic properties of indoor scenes are widely used in indoor understanding and reconstruction. The Manhattan World (MW) assumption is the predominant rule, and thus Manhattan frame estimation has been well researched for both RGB @cite_8 @cite_28 and RGB-D images @cite_42 @cite_27 . MW is widely used as guidance in many applications such as layout estimation @cite_8 @cite_10 @cite_0 @cite_41 @cite_5 @cite_44 , camera pose estimation @cite_29 , and reconstruction refinement @cite_19 @cite_35 .
{ "cite_N": [ "@cite_35", "@cite_8", "@cite_28", "@cite_41", "@cite_29", "@cite_42", "@cite_0", "@cite_44", "@cite_27", "@cite_19", "@cite_5", "@cite_10" ], "mid": [ "", "2116851763", "1965834447", "", "", "1955995711", "2534523274", "", "2474015332", "2963939259", "2113107168", "" ], "abstract": [ "", "We study the problem of generating plausible interpretations of a scene from a collection of line segments automatically extracted from a single indoor image. We show that we can recognize the three dimensional structure of the interior of a building, even in the presence of occluding objects. Several physically valid structure hypotheses are proposed by geometric reasoning and verified to find the best fitting model to line segments, which is then converted to a full 3D model. Our experiments demonstrate that our structure recovery from line segments is comparable with methods using full image appearance. Our approach shows how a set of rules describing geometric constraints between groups of segments can be used to prune scene interpretation hypotheses and to generate the most plausible interpretation.", "Existing approaches to indoor scene understanding formulate the problem as a structured prediction task focusing on estimating the 3D bounding box which best describes the scene layout. Unfortunately, these approaches utilize high order potentials which are computationally intractable and rely on ad-hoc approximations for both learning and inference. In this paper we show that the potentials commonly used in the literature can be decomposed into pair-wise potentials by extending the concept of integral images to geometry. As a consequence no heuristic reduction of the search space is required. In practice, this results in large improvements in performance over the state-of-the-art, while being orders of magnitude faster.", "", "", "This paper proposes a new framework for estimating the Manhattan Frame (MF) of an indoor scene from a single RGB-D image. Our technique formulates this problem as the estimation of a rotation matrix that best aligns the normals of the captured scene to a canonical world axes. By introducing sparsity constraints, our method can simultaneously estimate the scene MF, the surfaces in the scene that are best aligned to one of three coordinate axes, and the outlier surfaces that do not align with any of the axes. To test our approach, we contribute a new set of annotations to determine ground truth MFs in each image of the popular NYUv2 dataset. We use this new benchmark to experimentally demonstrate that our method is more accurate, faster, more reliable and more robust than the methods used in the literature. We further motivate our technique by showing how it can be used to address the RGB-D SLAM problem in indoor scenes by incorporating it into and improving the performance of a popular RGB-D SLAM method.", "In this paper, we consider the problem of recovering the spatial layout of indoor scenes from monocular images. The presence of clutter is a major problem for existing single-view 3D reconstruction algorithms, most of which rely on finding the ground-wall boundary. In most rooms, this boundary is partially or entirely occluded. We gain robustness to clutter by modeling the global room space with a parameteric 3D “box” and by iteratively localizing clutter and refitting the box. To fit the box, we introduce a structured learning algorithm that chooses the set of parameters to minimize error, based on global perspective cues. 
On a dataset of 308 images, we demonstrate the ability of our algorithm to recover spatial layout in cluttered rooms and show several examples of estimated free space.", "", "Given a set of surface normals, we pose a Manhattan Frame (MF) estimation problem as a consensus set maximization that maximizes the number of inliers over the rotation search space. We solve this problem through a branchand-bound framework, which mathematically guarantees a globally optimal solution. However, the computational time of conventional branch-and-bound algorithms are intractable for real-time performance. In this paper, we propose a novel bound computation method within an efficient measurement domain for MF estimation, i.e., the extended Gaussian image (EGI). By relaxing the original problem, we can compute the bounds in real-time, while preserving global optimality. Furthermore, we quantitatively and qualitatively demonstrate the performance of the proposed method for synthetic and real-world data. We also show the versatility of our approach through two applications: extension to multiple MF estimation and video stabilization.", "RGB-D scanning of indoor environments is important for many applications, including real estate, interior design, and virtual reality. However, it is still challenging to register RGB-D images from a hand-held camera over a long video sequence into a globally consistent 3D model. Current methods often can lose tracking or drift and thus fail to reconstruct salient structures in large environments (e.g., parallel walls in different rooms). To address this problem, we propose a fine-to-coarse global registration algorithm that leverages robust registrations at finer scales to seed detection and enforcement of new correspondence and structural constraints at coarser scales. To test global registration algorithms, we provide a benchmark with 10,401 manually-clicked point correspondences in 25 scenes from the SUN3D dataset. During experiments with this benchmark, we find that our fine-to-coarse algorithm registers long RGB-D sequences better than previous methods.", "In this paper we propose an approach to jointly infer the room layout as well as the objects present in the scene. Towards this goal, we propose a branch and bound algorithm which is guaranteed to retrieve the global optimum of the joint problem. The main difficulty resides in taking into account occlusion in order to not over-count the evidence. We introduce a new decomposition method, which generalizes integral geometry to triangular shapes, and allows us to bound the different terms in constant time. We exploit both geometric cues and object detectors as image features and show large improvements in 2D and 3D object detection over state-of-the-art deformable part-based models.", "" ] }
1812.06677
2904325312
We present a novel approach to reconstructing large or featureless scenes. Our method jointly estimates camera poses and a room layout from a set of partial reconstructions that arise from camera tracking interruptions when scanning a large or featureless scene. Unlike existing methods that rely on feature point matching to localize the camera, we exploit the 3D "box" structure of a typical room layout that satisfies the Manhattan World property. We first estimate a local layout for each partial scan separately and then combine these local layouts to form a globally aligned layout with loop closure. We validate our method quantitatively and qualitatively on real and synthetic scenes of various sizes and complexities. The evaluations and comparisons show the superior effectiveness and accuracy of our method.
In addition to the MW assumption, indoor scenes contain plentiful lines and planes, which provide strong cues for many tasks. Elqursh and Elgammal @cite_20 present a line-based camera pose estimation method, while Koch et al. @cite_7 use 3D line segments to align non-overlapping indoor and outdoor reconstructions. Planar patch detection and matching @cite_40 @cite_21 @cite_36 @cite_2 @cite_6 @cite_43 @cite_9 are extensively used strategies to improve reconstruction quality. Some works @cite_40 @cite_21 @cite_36 @cite_2 exploit plane correspondences to solve for frame-to-frame camera poses. Shi et al. @cite_1 use a CNN to learn a feature descriptor for planar patches in RGB-D images. Halber et al. @cite_19 and Lee et al. @cite_3 perform global registration that leverages structural constraints to improve scan accuracy. These approaches all hinge on successful feature matching in overlapping areas, as opposed to the scenario considered in this paper.
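As a rough illustration of the alignment step that such correspondence-based registration methods rely on once plane or point matches are available, the following Python sketch shows the classic SVD-based (Kabsch) rigid alignment from matched 3D points, e.g., centroids of matched planar patches. The correspondences and transform below are made-up example values, and this is not the specific formulation of any cited work.

```python
import numpy as np

def rigid_align(P, Q):
    """Estimate rotation R and translation t such that R @ p + t ~= q.

    P, Q: (N, 3) arrays of matched 3D points (e.g., centroids of
    matched planar patches). Classic SVD-based (Kabsch) solution.
    """
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_mean - R @ p_mean
    return R, t

# Toy check with a known transform (made-up data).
rng = np.random.default_rng(1)
P = rng.standard_normal((10, 3))
angle = np.deg2rad(30.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -0.2, 1.0])
Q = P @ R_true.T + t_true
R_est, t_est = rigid_align(P, Q)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))
```

The quality of the recovered pose is only as good as the matches fed into it, which is exactly why the paper avoids relying on feature matching for scans with little or no overlap.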
{ "cite_N": [ "@cite_7", "@cite_36", "@cite_9", "@cite_21", "@cite_1", "@cite_6", "@cite_3", "@cite_43", "@cite_40", "@cite_19", "@cite_2", "@cite_20" ], "mid": [ "2484731041", "", "", "", "2789805612", "", "2609058327", "", "2769022369", "2963939259", "", "2008471589" ], "abstract": [ "This paper presents an approach for automatically aligning the non-overlapping interior and exterior parts of a 3D building model computed from image based 3D reconstructions. We propose a method to align the 3D reconstructions by identifying corresponding 3D structures that are part of the interior and exterior model (e.g. openings like windows). In this context, we point out the potential of using 3D line segments to enrich the information of point clouds generated by SfMs and show how this can be used for interpreting the scene and matching individual reconstructions.", "", "", "", "We introduce a novel RGB-D patch descriptor designed for detecting coplanar surfaces in SLAM reconstruction. The core of our method is a deep convolutional neural net that takes in RGB, depth, and normal information of a planar patch in an image and outputs a descriptor that can be used to find coplanar patches from other images.We train the network on 10 million triplets of coplanar and non-coplanar patches, and evaluate on a new coplanarity benchmark created from commodity RGB-D scans. Experiments show that our learned descriptor outperforms alternatives extended for this new task by a significant margin. In addition, we demonstrate the benefits of coplanarity matching in a robust RGBD reconstruction formulation.We find that coplanarity constraints detected with our method are sufficient to get reconstruction results comparable to state-of-the-art frameworks on most scenes, but outperform other methods on standard benchmarks when combined with a simple keypoint method.", "", "In this paper, we propose a novel method to jointly solve scene layout estimation and global registration problems for accurate indoor 3D reconstruction. Given a sequence of range data, we first build a set of scene fragments using KinectFusion and register them through pose graph optimization. Afterwards, we alternate between layout estimation and layout-based global registration processes in iterative fashion to complement each other. We extract the scene layout through hierarchical agglomerative clustering and energy-based multi-model fitting in consideration of noisy measurements. Having the estimated scene layout in one hand, we register all the range data through the global iterative closest point algorithm where the positions of 3D points that belong to the layout such as walls and a ceiling are constrained to be close to the layout. We experimentally verify the proposed method with the publicly available synthetic and real-world datasets in both quantitative and qualitative ways.", "", "We present 3DLite1, a novel approach to reconstruct 3D environments using consumer RGB-D sensors, making a step towards directly utilizing captured 3D content in graphics applications, such as video games, VR, or AR. Rather than reconstructing an accurate one-to-one representation of the real world, our method computes a lightweight, low-polygonal geometric abstraction of the scanned geometry. We argue that for many graphics applications it is much more important to obtain high-quality surface textures rather than highly-detailed geometry. 
To this end, we compensate for motion blur, auto-exposure artifacts, and micro-misalignments in camera poses by warping and stitching image fragments from low-quality RGB input data to achieve high-resolution, sharp surface textures. In addition to the observed regions of a scene, we extrapolate the scene geometry, as well as the mapped surface textures, to obtain a complete 3D model of the environment. We show that a simple planar abstraction of the scene geometry is ideally suited for this completion task, enabling 3DLite to produce complete, lightweight, and visually compelling 3D scene models. We believe that these CAD-like reconstructions are an important step towards leveraging RGB-D scanning in actual content creation pipelines.", "RGB-D scanning of indoor environments is important for many applications, including real estate, interior design, and virtual reality. However, it is still challenging to register RGB-D images from a hand-held camera over a long video sequence into a globally consistent 3D model. Current methods often can lose tracking or drift and thus fail to reconstruct salient structures in large environments (e.g., parallel walls in different rooms). To address this problem, we propose a fine-to-coarse global registration algorithm that leverages robust registrations at finer scales to seed detection and enforcement of new correspondence and structural constraints at coarser scales. To test global registration algorithms, we provide a benchmark with 10,401 manually-clicked point correspondences in 25 scenes from the SUN3D dataset. During experiments with this benchmark, we find that our fine-to-coarse algorithm registers long RGB-D sequences better than previous methods.", "", "We present an algorithm for calibrated camera relative pose estimation from lines. Given three lines with two of the lines parallel and orthogonal to the third we can compute the relative rotation between two images. We can also compute the relative translation from two intersection points. We also present a framework in which such lines can be detected. We evaluate the performance of the algorithm using synthetic and real data. The intended use of the algorithm is with robust hypothesize-and-test frameworks such as RANSAC. Our approach is suitable for urban and indoor environments where most lines are either parallel or orthogonal to each other." ] }
1812.06698
2953271208
In modern election campaigns, political parties utilize social media to advertise their policies and candidates and to communicate with the electorate. In Japan's latest general election in 2017, the 48th general election for the Lower House, social media, especially Twitter, was actively used. In this paper, we analyze the users who retweeted tweets of political parties on Twitter during the election. Our aim is to clarify what kinds of users diffuse (retweet) tweets of political parties. The results indicate that the characteristics of the retweeters of the largest ruling party (Liberal Democratic Party of Japan) and the largest opposition party (The Constitutional Democratic Party of Japan) were similar, even though the two groups of retweeters did not overlap. We also found that a particular opposition party (Japanese Communist Party) had quite different characteristics from the other political parties.
The number of followers a user has is often used as a measure of the attention the user receives. However, the number of followers does not necessarily have a large influence on information diffusion @cite_0 , and fraudulent methods are sometimes used to gain followers @cite_3 . Even in political communication, information hubs cannot be identified by follower counts alone @cite_8 . On the other hand, few studies have analyzed the tweet strategy of each party based on users' tweet characteristics.
{ "cite_N": [ "@cite_0", "@cite_3", "@cite_8" ], "mid": [ "1814023381", "2011366667", "" ], "abstract": [ "Directed links in social media could represent anything from intimate friendships to common interests, or even a passion for breaking news or celebrity gossip. Such directed links determine the flow of information and hence indicate a user's influence on others — a concept that is crucial in sociology and viral marketing. In this paper, using a large amount of data collected from Twitter, we present an in-depth comparison of three measures of influence: indegree, retweets, and mentions. Based on these measures, we investigate the dynamics of user influence across topics and time. We make several interesting observations. First, popular users who have high indegree are not necessarily influential in terms of spawning retweets or mentions. Second, most influential users can hold significant influence over a variety of topics. Third, influence is not gained spontaneously or accidentally, but through concerted effort such as limiting tweets to a single topic. We believe that these findings provide new insights for viral marketing and suggest that topological measures such as indegree alone reveals very little about the influence of a user.", "The users of microblogging services, such as Twitter, use the count of followers of an account as a measure of its reputation or influence. For those unwilling or unable to attract followers naturally, a growing industry of \"Twitter follower markets\" provides followers for sale. Some markets use fake accounts to boost the follower count of their customers, while others rely on a pyramid scheme to turn non-paying customers into followers for each other, and into followers for paying customers. In this paper, we present a detailed study of Twitter follower markets, report in detail on both the static and dynamic properties of customers of these markets, and develop and evaluate multiple techniques for detecting these activities. We show that our detection system is robust and reliable, and can detect a significant number of customers in the wild.", "" ] }
1812.06587
2904946678
Video description is one of the most challenging problems in vision and language understanding due to the large variability on both the video and the language side. Models hence typically shortcut the difficulty in recognition and generate plausible sentences that are based on priors but are not necessarily grounded in the video. In this work, we explicitly link the sentence to the evidence in the video by annotating each noun phrase in a sentence with the corresponding bounding box in one of the frames of a video. Our dataset, ActivityNet-Entities, augments the challenging ActivityNet Captions dataset with 158k bounding box annotations, each grounding a noun phrase. This allows training video description models with this data and, importantly, evaluating how grounded or "true" such models are to the video they describe. To generate grounded captions, we propose a novel video description model which is able to exploit these bounding box annotations. We demonstrate the effectiveness of our model on our dataset, but also show how it can be applied to image description on the Flickr30k Entities dataset. We achieve state-of-the-art performance on video description, video paragraph description, and image description and demonstrate that our generated sentences are better grounded in the video.
Attention Supervision. As fine-grained grounding becomes a potential incentive for next-generation vision-language systems, the degree to which it can help remains an open question. On the one hand, for VQA @cite_25 @cite_13 the authors point out that the attention model does not attend to the same regions as humans do, and adding attention supervision barely helps performance. On the other hand, adding supervision to feature map attention @cite_3 @cite_19 was found to be beneficial. We noticed in our preliminary experiments that directly guiding the region attention with supervision @cite_49 does not necessarily lead to improvements in automatic sentence metrics. We hypothesize that this might be due to the lack of object context information, and we thus introduce a self-attention @cite_23 based context encoding in our attention model, which allows message passing across all regions in the sampled video frames.
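To make the message-passing idea concrete, here is a minimal sketch of scaled dot-product self-attention over a set of region features, in the spirit of the Transformer attention cited above. The feature dimensions and the random region features are made-up example values, not the actual model configuration.

```python
import numpy as np

def self_attention(regions, Wq, Wk, Wv):
    """Scaled dot-product self-attention over N region features.

    regions: (N, d) array; Wq, Wk, Wv: (d, d_k) projection matrices.
    Every region attends to every other region, so the output carries
    context from all sampled frames (illustrative sketch only).
    """
    Q, K, V = regions @ Wq, regions @ Wk, regions @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])        # (N, N) affinities
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)       # row-wise softmax
    return attn @ V                               # context-enriched features

# Toy example: 12 regions with 64-dim features (made-up sizes).
rng = np.random.default_rng(0)
d, d_k = 64, 32
regions = rng.standard_normal((12, d))
Wq, Wk, Wv = (rng.standard_normal((d, d_k)) for _ in range(3))
context = self_attention(regions, Wq, Wk, Wv)
print(context.shape)   # (12, 32)
```

The context-enriched region features can then be attended over by the language decoder, which is the role the self-attention context encoding plays in the description above.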
{ "cite_N": [ "@cite_3", "@cite_19", "@cite_49", "@cite_23", "@cite_13", "@cite_25" ], "mid": [ "2410323755", "", "2795151422", "2626778328", "2949991003", "2883092128" ], "abstract": [ "Attention mechanisms have recently been introduced in deep learning for various tasks in natural language processing and computer vision. But despite their popularity, the \"correctness\" of the implicitly-learned attention maps has only been assessed qualitatively by visualization of several examples. In this paper we focus on evaluating and improving the correctness of attention in neural image captioning models. Specifically, we propose a quantitative evaluation metric for the consistency between the generated attention maps and human annotations, using recently released datasets with alignment between regions in images and entities in captions. We then propose novel models with different levels of explicit supervision for learning attention maps during training. The supervision can be strong when alignment between regions and caption entities are available, or weak when only object segments and categories are provided. We show on the popular Flickr30k and COCO datasets that introducing supervision of attention maps during training solidly improves both attention correctness and caption quality, showing the promise of making machine perception more human-like.", "", "We introduce a novel framework for image captioning that can produce natural language explicitly grounded in entities that object detectors find in the image. Our approach reconciles classical slot filling approaches (that are generally better grounded in images) with modern neural captioning approaches (that are generally more natural sounding and accurate). Our approach first generates a sentence 'template' with slot locations explicitly tied to specific image regions. These slots are then filled in by visual concepts identified in the regions by object detectors. The entire architecture (sentence template generation and slot filling with object detectors) is end-to-end differentiable. We verify the effectiveness of our proposed model on different image captioning tasks. On standard image captioning and novel object captioning, our model reaches state-of-the-art on both COCO and Flickr30k datasets. We also demonstrate that our model has unique advantages when the train and test distributions of scene compositions - and hence language priors of associated captions - are different. Code has been made available at: https: github.com jiasenlu NeuralBabyTalk.", "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. 
We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.", "A key aspect of VQA models that are interpretable is their ability to ground their answers to relevant regions in the image. Current approaches with this capability rely on supervised learning and human annotated groundings to train attention mechanisms inside the VQA architecture. Unfortunately, obtaining human annotations specific for visual grounding is difficult and expensive. In this work, we demonstrate that we can effectively train a VQA architecture with grounding supervision that can be automatically obtained from available region descriptions and object annotations. We also show that our model trained with this mined supervision generates visual groundings that achieve a higher correlation with respect to manually-annotated groundings, meanwhile achieving state-of-the-art VQA accuracy.", "Abstract We conduct large-scale studies on ‘human attention’ in Visual Question Answering (VQA) to understand where humans choose to look to answer questions about images. We design and test multiple game-inspired novel attention-annotation interfaces that require the subject to sharpen regions of a blurred image to answer a question. Thus, we introduce the VQA-HAT (Human ATtention) dataset. We evaluate attention maps generated by state-of-the-art VQA models against human attention both qualitatively (via visualizations) and quantitatively (via rank-order correlation). Our experiments show that current attention models in VQA do not seem to be looking at the same regions as humans. Finally, we train VQA models with explicit attention supervision, and find that it improves VQA performance." ] }
1812.06553
2905548585
Several organizations have built multiple datacenters connected via dedicated wide area networks over which large inter-datacenter transfers take place. This includes tremendous volumes of bulk multicast traffic generated as a result of data and content replication. Although one can perform these transfers using a single multicast forwarding tree, that can lead to poor performance, as the slowest receiver on each tree dictates the completion time for all receivers. Using multiple trees per transfer, each connected to a subset of receivers, alleviates this concern. The choice of multicast trees also determines the total bandwidth usage. To further improve performance, bandwidth over dedicated inter-datacenter networks can be carved out for different multicast trees over specific time periods to avoid congestion and minimize the average receiver completion times. In this paper, we break this problem into the three sub-problems of partitioning, tree selection, and rate allocation. We present an algorithm called QuickCast, which is computationally fast and significantly speeds up multiple receivers per bulk multicast transfer while controlling the extra bandwidth consumption. We evaluate QuickCast against a variety of synthetic and real traffic patterns as well as real WAN topologies. Compared to performing bulk multicast transfers as separate unicast transfers, QuickCast achieves up to @math reduction in mean completion times while using @math the bandwidth. Also, QuickCast allows the top @math of receivers to complete between @math and @math faster on average compared with when a single forwarding multicast tree is used for data delivery.
A large body of general multicasting approaches has been proposed, such as IP multicasting @cite_44 , TCP-SMO @cite_58 , and NORM @cite_19 , in which receivers can join multicast groups at any time to receive the required data, and multicast trees are incrementally built and pruned as nodes join or leave a multicast session. These solutions focus on building and maintaining multicast trees, and do not consider link capacities or other ongoing multicast flows while building the trees.
{ "cite_N": [ "@cite_44", "@cite_19", "@cite_58" ], "mid": [ "", "2240045799", "2162111441" ], "abstract": [ "", "This document describes the messages and procedures of the Negative- ACKnowledgment (NACK) Oriented Reliable Multicast (NORM) Protocol. This protocol is designed to provide end-to-end reliable transport of bulk data objects or streams over generic IP multicast routing and forwarding services. NORM uses a selective, negative acknowledgment mechanism for transport reliability and offers additional protocol mechanisms to allow for operation with minimal a priori coordination among senders and receivers. A congestion control scheme is specified to allow the NORM protocol to fairly share available network bandwidth with other transport protocols such as Transmission Control Protocol (TCP). It is capable of operating with both reciprocal multicast routing among senders and receivers and with asymmetric connectivity (possibly a unicast return path) between the senders and receivers. The protocol offers a number of features to allow different types of applications or possibly other higher level transport protocols to utilize its service in different ways. The protocol leverages the use of FEC-based repair and other IETF reliable multicast transport (RMT) building blocks in its design.", "Scalable reliable multicast protocols have been the focus of recent research, tackling the problem of efficient reliable data delivery to an arbitrarily large number of receivers. Yet, the common applications of multicast, such as multi-point file delivery, or video streaming from a media server, typically only involve a moderate number of receivers, such as a thousand or fewer. Moreover, because of the limited deployment of these specialized multicast protocols, it is common, when feasible, for applications to use multiple TCP connections instead, one for each receiver, to implement multi-point delivery, causing a significant demand on the transmission server and the downstream links. We describe a multicast extension to TCP, called single-source multicast optimization (SMO), that optimizes this case of multipoint delivery, providing the benefits of multicast together with the familiar features and API of TCP. Our results from experiments based on a Linux implementation and performed on a testbed show that TCP-SMO requires just a modest extension to the TCP implementation and provides scalable performance of multicast up to over a thousand receivers, thereby satisfying the common case requirements. In addition, used with TCP-RTM (real-time mode), TCP-SMO also supports real-time multimedia multicast applications well." ] }
1812.06553
2905548585
Several organizations have built multiple datacenters connected via dedicated wide area networks over which large inter-datacenter transfers take place. This includes tremendous volumes of bulk multicast traffic generated as a result of data and content replication. Although one can perform these transfers using a single multicast forwarding tree, that can lead to poor performance as the slowest receiver on each tree dictates the completion time for all receivers. Using multiple trees per transfer each connected to a subset of receivers alleviates this concern. The choice of multicast trees also determines the total bandwidth usage. To further improve the performance, bandwidth over dedicated inter-datacenter networks can be carved for different multicast trees over specific time periods to avoid congestion and minimize the average receiver completion times. In this paper, we break this problem into the three sub-problems of partitioning, tree selection, and rate allocation. We present an algorithm called QuickCast which is computationally fast and allows us to significantly speed up multiple receivers per bulk multicast transfer with control over extra bandwidth consumption. We evaluate QuickCast against a variety of synthetic and real traffic patterns as well as real WAN topologies. Compared to performing bulk multicast transfers as separate unicast transfers, QuickCast achieves up to @math reduction in mean completion times while at the same time using @math the bandwidth. Also, QuickCast allows the top @math of receivers to complete between @math to @math faster on average compared with when a single forwarding multicast tree is used for data delivery.
A variety of solutions have been proposed for minimizing congestion across the intra-datacenter network by selecting multicast trees according to link utilization. Datacast @cite_40 sends data over edge-disjoint Steiner trees found by pruning spanning trees over FatTree, BCube, and Torus topologies. AvRA @cite_63 focuses on tree and FatTree topologies and builds minimum-edge Steiner trees that connect the sender to all receivers as they join. MCTCP @cite_57 reactively schedules flows according to link utilization. These works do not aim to minimize the completion times of receivers and ignore the total bandwidth consumption. (A small illustrative sketch of Steiner-tree-based tree selection follows this record.)
{ "cite_N": [ "@cite_57", "@cite_40", "@cite_63" ], "mid": [ "2537090207", "", "2037572363" ], "abstract": [ "Continuously enriched distributed systems in data centers generate much network traffic in push-style one-to-many group mode, raising new requirements for multicast transport in terms of efficiency and robustness. Existing reliable multicast solutions, which suffer from low robustness and inefficiency in either host-side protocols or multicast routing, are not suitable for data centers. In order to address the problems of inefficiency and low robustness, we present a sender-initiated, efficient, congestion-aware and robust reliable multicast solution mainly for small groups in SDN-based data centers, called MCTCP. The main idea behind MCTCP is to manage the multicast groups in a centralized manner, and reactively schedule multicast flows to active and low-utilized links, by extending TCP as the host-side protocol and managing multicast groups in the SDN-controller. The multicast spanning trees are calculated and adjusted according to the network status to perform a better allocation of resources. Our experiments show that, MCTCP can dynamically bypass the congested and failing links, achieving high efficiency and robustness. As a result, MCTCP outperforms the state-of-the-art reliable multicast schemes. Moreover, MCTCP improves the performance of data replication in HDFS compared with the original and TCP-SMO based ones, e.g., achieves 101 and 50 improvements in terms of bandwidth, respectively.", "", "Many existing data center network (DCN) flow scheduling schemes minimize flow completion times (FCT) based on prior knowledge of flows and custom switch designs, making them hard to use in practice. This paper introduces, Pias, a practical flow scheduling approach that minimizes FCT with no prior knowledge using commodity switches. At its heart, Pias leverages multiple priority queues available in commodity switches to implement a Multiple Level Feedback Queue (MLFQ), in which a PIAS flow gradually demotes from higher-priority queues to lower-priority queues based on the bytes it has sent. In this way, short flows are prioritized over long flows, which enables Pias to emulate Shortest Job First (SJF) scheduling without knowing the flow sizes beforehand. Our preliminary evaluation shows that Pias significantly outperforms all existing information-agnostic solutions. It improves average FCT for short flows by up to 50 and 40 over DCTCP [3] and L2DCT [16]. Compared to an ideal information-aware DCN transport, p-Fabric [5], it only shows 4.9 performance degradation for short flows in a production datacenter workload." ] }
1812.06553
2905548585
Several organizations have built multiple datacenters connected via dedicated wide area networks over which large inter-datacenter transfers take place. This includes tremendous volumes of bulk multicast traffic generated as a result of data and content replication. Although one can perform these transfers using a single multicast forwarding tree, that can lead to poor performance as the slowest receiver on each tree dictates the completion time for all receivers. Using multiple trees per transfer each connected to a subset of receivers alleviates this concern. The choice of multicast trees also determines the total bandwidth usage. To further improve the performance, bandwidth over dedicated inter-datacenter networks can be carved for different multicast trees over specific time periods to avoid congestion and minimize the average receiver completion times. In this paper, we break this problem into the three sub-problems of partitioning, tree selection, and rate allocation. We present an algorithm called QuickCast which is computationally fast and allows us to significantly speed up multiple receivers per bulk multicast transfer with control over extra bandwidth consumption. We evaluate QuickCast against a variety of synthetic and real traffic patterns as well as real WAN topologies. Compared to performing bulk multicast transfers as separate unicast transfers, QuickCast achieves up to @math reduction in mean completion times while at the same time using @math the bandwidth. Also, QuickCast allows the top @math of receivers to complete between @math to @math faster on average compared with when a single forwarding multicast tree is used for data delivery.
Various techniques have been proposed to make multicasting reliable, including the use of coding and receiver (negative or positive) acknowledgments. Experiments have shown that using positive ACKs does not lead to ACK implosion for medium-scale (sub-thousand) receiver groups @cite_58 . TCP-XM @cite_37 allows reliable delivery by using a combination of IP multicast and unicast for data delivery and re-transmissions. MCTCP @cite_57 applies standard TCP mechanisms for reliability. Another approach is for receivers to send NAKs upon expiration of some inactivity timer @cite_19 . NAK suppression, which can be applied by routers, has been proposed to address implosion @cite_7 . Forward Error Correction (FEC) has been used to reduce re-transmissions @cite_19 and improve completion times @cite_13 ; examples include Raptor Codes @cite_38 and Tornado Codes @cite_52 . These techniques are complementary to QuickCast. (A small illustrative sketch of packet-level FEC follows this record.)
{ "cite_N": [ "@cite_38", "@cite_37", "@cite_7", "@cite_52", "@cite_57", "@cite_19", "@cite_58", "@cite_13" ], "mid": [ "", "2120679453", "1539914100", "2161342511", "2537090207", "2240045799", "2162111441", "" ], "abstract": [ "", "In recent years, much work has been done on attempting to scale multicast data transmission to hundreds or thousands of receivers. There are, however, many situations where an application might involve transmission to just ten or twenty sites. Using multicast for this type of application can provide significant benefits including reduced load on the transmitter an overall reduction in network traffic, and consequently shorter data transfer times. In this project, we are investigating how partial or incomplete multicast can be exploited alongside reliable unicast to improve both speed and efficiency of data transfers while maintaining reliability. The approach taken is to combine unicast with multicast by modifying TCP to support multicast transfers, and run this modified TCP engine over UDP as a userspace transport protocol. We describe the work to date on the design and implementation, and provide experimental results from our tests across both local and wide area networks.", "This paper presents a novel loss recovery scheme, active reliable multicast (ARM), for large scale reliable multicast. ARM is \"active\" in that routers in the multicast tree play an active role in loss recovery. Additionally, ARM utilizes soft-state storage within the network to improve performance and scalability. In the upstream direction, routers suppress duplicate NACKs from multiple receivers to control the implosion problem. By suppressing duplicate NACKs, ARM also lessens the traffic that propagates back through the network, In the downstream direction, routers limit the delivery of repair packets to receivers experiencing loss, thereby reducing network bandwidth consumption. Finally, to reduce wide-area recovery latency and to distribute the retransmission load, routers cache multicast data on a \"best effort\" basis. ARM is flexible and robust in that it does not require all nodes to be active, nor does it require any specific router or receiver to perform loss recovery. Analysis and simulation results show that ARM yields significant benefits even when less than half the routers within the multicast tree can perform ARM processing.", "The proliferation of applications that must reliably distribute bulk data to a large number of autonomous clients motivates the design of new multicast and broadcast protocols. We describe an ideal, fully scalable protocol for these applications that we call a digital fountain. A digital fountain allows any number of heterogeneous clients to acquire bulk data with optimal efficiency at times of their choosing. Moreover, no feedback channels are needed to ensure reliable delivery, even in the face of high loss rates.We develop a protocol that closely approximates a digital fountain using a new class of erasure codes that for large block sizes are orders of magnitude faster than standard erasure codes. We provide performance measurements that demonstrate the feasibility of our approach and discuss the design, implementation and performance of an experimental system.", "Continuously enriched distributed systems in data centers generate much network traffic in push-style one-to-many group mode, raising new requirements for multicast transport in terms of efficiency and robustness. 
Existing reliable multicast solutions, which suffer from low robustness and inefficiency in either host-side protocols or multicast routing, are not suitable for data centers. In order to address the problems of inefficiency and low robustness, we present a sender-initiated, efficient, congestion-aware and robust reliable multicast solution mainly for small groups in SDN-based data centers, called MCTCP. The main idea behind MCTCP is to manage the multicast groups in a centralized manner, and reactively schedule multicast flows to active and low-utilized links, by extending TCP as the host-side protocol and managing multicast groups in the SDN-controller. The multicast spanning trees are calculated and adjusted according to the network status to perform a better allocation of resources. Our experiments show that, MCTCP can dynamically bypass the congested and failing links, achieving high efficiency and robustness. As a result, MCTCP outperforms the state-of-the-art reliable multicast schemes. Moreover, MCTCP improves the performance of data replication in HDFS compared with the original and TCP-SMO based ones, e.g., achieves 101 and 50 improvements in terms of bandwidth, respectively.", "This document describes the messages and procedures of the Negative- ACKnowledgment (NACK) Oriented Reliable Multicast (NORM) Protocol. This protocol is designed to provide end-to-end reliable transport of bulk data objects or streams over generic IP multicast routing and forwarding services. NORM uses a selective, negative acknowledgment mechanism for transport reliability and offers additional protocol mechanisms to allow for operation with minimal a priori coordination among senders and receivers. A congestion control scheme is specified to allow the NORM protocol to fairly share available network bandwidth with other transport protocols such as Transmission Control Protocol (TCP). It is capable of operating with both reciprocal multicast routing among senders and receivers and with asymmetric connectivity (possibly a unicast return path) between the senders and receivers. The protocol offers a number of features to allow different types of applications or possibly other higher level transport protocols to utilize its service in different ways. The protocol leverages the use of FEC-based repair and other IETF reliable multicast transport (RMT) building blocks in its design.", "Scalable reliable multicast protocols have been the focus of recent research, tackling the problem of efficient reliable data delivery to an arbitrarily large number of receivers. Yet, the common applications of multicast, such as multi-point file delivery, or video streaming from a media server, typically only involve a moderate number of receivers, such as a thousand or fewer. Moreover, because of the limited deployment of these specialized multicast protocols, it is common, when feasible, for applications to use multiple TCP connections instead, one for each receiver, to implement multi-point delivery, causing a significant demand on the transmission server and the downstream links. We describe a multicast extension to TCP, called single-source multicast optimization (SMO), that optimizes this case of multipoint delivery, providing the benefits of multicast together with the familiar features and API of TCP. 
Our results from experiments based on a Linux implementation and performed on a testbed show that TCP-SMO requires just a modest extension to the TCP implementation and provides scalable performance of multicast up to over a thousand receivers, thereby satisfying the common case requirements. In addition, used with TCP-RTM (real-time mode), TCP-SMO also supports real-time multimedia multicast applications well.", "" ] }
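As a concrete illustration of how FEC reduces re-transmissions in reliable multicast (mentioned in the related-work paragraph above), the following toy sketch adds a single XOR parity packet per block so a receiver can repair one lost packet locally. This is only the simplest possible parity scheme, not the Raptor or Tornado codes cited there; the packet contents are invented.

# Toy packet-level FEC: one XOR parity packet per block lets a receiver
# recover any single lost packet without asking for a retransmission.
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(block):
    parity = bytes(len(block[0]))          # all-zero packet of the same length
    for pkt in block:
        parity = xor_bytes(parity, pkt)
    return parity

block = [b"AAAA", b"BBBB", b"CCCC"]        # invented equal-length packets
parity = make_parity(block)

# Suppose packet 1 is lost; XOR of the surviving packets and the parity recovers it.
recovered = xor_bytes(xor_bytes(block[0], block[2]), parity)
assert recovered == block[1]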
1812.06553
2905548585
Several organizations have built multiple datacenters connected via dedicated wide area networks over which large inter-datacenter transfers take place. This includes tremendous volumes of bulk multicast traffic generated as a result of data and content replication. Although one can perform these transfers using a single multicast forwarding tree, that can lead to poor performance as the slowest receiver on each tree dictates the completion time for all receivers. Using multiple trees per transfer each connected to a subset of receivers alleviates this concern. The choice of multicast trees also determines the total bandwidth usage. To further improve the performance, bandwidth over dedicated inter-datacenter networks can be carved for different multicast trees over specific time periods to avoid congestion and minimize the average receiver completion times. In this paper, we break this problem into the three sub-problems of partitioning, tree selection, and rate allocation. We present an algorithm called QuickCast which is computationally fast and allows us to significantly speed up multiple receivers per bulk multicast transfer with control over extra bandwidth consumption. We evaluate QuickCast against a variety of synthetic and real traffic patterns as well as real WAN topologies. Compared to performing bulk multicast transfers as separate unicast transfers, QuickCast achieves up to @math reduction in mean completion times while at the same time using @math the bandwidth. Also, QuickCast allows the top @math of receivers to complete between @math to @math faster on average compared with when a single forwarding multicast tree is used for data delivery.
Existing approaches track the slowest receiver. PGMCC @cite_47 , MCTCP @cite_57 and TCP-SMO @cite_58 use window-based, TCP-like congestion control to compete fairly with other flows. NORM @cite_19 uses an equation-based rate control scheme. With rate allocation and end-host-based rate limiting applied over inter-datacenter networks, the need for distributed congestion control becomes minimal; however, such techniques can still be used as a backup. (A toy sketch of window-based congestion control follows this record.)
{ "cite_N": [ "@cite_57", "@cite_19", "@cite_47", "@cite_58" ], "mid": [ "2537090207", "2240045799", "2024370454", "2162111441" ], "abstract": [ "Continuously enriched distributed systems in data centers generate much network traffic in push-style one-to-many group mode, raising new requirements for multicast transport in terms of efficiency and robustness. Existing reliable multicast solutions, which suffer from low robustness and inefficiency in either host-side protocols or multicast routing, are not suitable for data centers. In order to address the problems of inefficiency and low robustness, we present a sender-initiated, efficient, congestion-aware and robust reliable multicast solution mainly for small groups in SDN-based data centers, called MCTCP. The main idea behind MCTCP is to manage the multicast groups in a centralized manner, and reactively schedule multicast flows to active and low-utilized links, by extending TCP as the host-side protocol and managing multicast groups in the SDN-controller. The multicast spanning trees are calculated and adjusted according to the network status to perform a better allocation of resources. Our experiments show that, MCTCP can dynamically bypass the congested and failing links, achieving high efficiency and robustness. As a result, MCTCP outperforms the state-of-the-art reliable multicast schemes. Moreover, MCTCP improves the performance of data replication in HDFS compared with the original and TCP-SMO based ones, e.g., achieves 101 and 50 improvements in terms of bandwidth, respectively.", "This document describes the messages and procedures of the Negative- ACKnowledgment (NACK) Oriented Reliable Multicast (NORM) Protocol. This protocol is designed to provide end-to-end reliable transport of bulk data objects or streams over generic IP multicast routing and forwarding services. NORM uses a selective, negative acknowledgment mechanism for transport reliability and offers additional protocol mechanisms to allow for operation with minimal a priori coordination among senders and receivers. A congestion control scheme is specified to allow the NORM protocol to fairly share available network bandwidth with other transport protocols such as Transmission Control Protocol (TCP). It is capable of operating with both reciprocal multicast routing among senders and receivers and with asymmetric connectivity (possibly a unicast return path) between the senders and receivers. The protocol offers a number of features to allow different types of applications or possibly other higher level transport protocols to utilize its service in different ways. The protocol leverages the use of FEC-based repair and other IETF reliable multicast transport (RMT) building blocks in its design.", "We present a single rate multicast congestion control scheme(pgmcc) which is TCP-friendly and achieves scalability, stability and fast response to variations in network conditions. pgmcc is suitable for both non-reliable and reliable data transfers; it uses a window-based TCP-like controller based on positive ACKs and run between the sender and a group's representative, the acker . The innovative part of pgmcc is a fast and low-overhead procedure to select (and track changes of) the acker, which permits us to consider the acker as a moving receiver rather than a changing one. As such, the scheme is robust to measurement errors, and supports fast response to changes in the receiver set and or network conditions. 
The scheme has been implemented in the PGM protocol, and the paper presents a number of experimental results on its performance.", "Scalable reliable multicast protocols have been the focus of recent research, tackling the problem of efficient reliable data delivery to an arbitrarily large number of receivers. Yet, the common applications of multicast, such as multi-point file delivery, or video streaming from a media server, typically only involve a moderate number of receivers, such as a thousand or fewer. Moreover, because of the limited deployment of these specialized multicast protocols, it is common, when feasible, for applications to use multiple TCP connections instead, one for each receiver, to implement multi-point delivery, causing a significant demand on the transmission server and the downstream links. We describe a multicast extension to TCP, called single-source multicast optimization (SMO), that optimizes this case of multipoint delivery, providing the benefits of multicast together with the familiar features and API of TCP. Our results from experiments based on a Linux implementation and performed on a testbed show that TCP-SMO requires just a modest extension to the TCP implementation and provides scalable performance of multicast up to over a thousand receivers, thereby satisfying the common case requirements. In addition, used with TCP-RTM (real-time mode), TCP-SMO also supports real-time multimedia multicast applications well." ] }
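The window-based, TCP-like congestion control used by PGMCC, MCTCP, and TCP-SMO can be illustrated, in a very rough way, by the classic additive-increase/multiplicative-decrease rule sketched below. This is a generic AIMD toy, not the actual control law of any of those protocols; the loss schedule and constants are made up.

# Toy additive-increase/multiplicative-decrease (AIMD) congestion window trace.
def aimd(loss_events, rounds, alpha=1.0, beta=0.5, cwnd=1.0):
    trace = []
    for r in range(rounds):
        if r in loss_events:
            cwnd = max(1.0, cwnd * beta)   # multiplicative decrease on a loss event
        else:
            cwnd += alpha                  # additive increase per round otherwise
        trace.append(cwnd)
    return trace

print(aimd(loss_events={5, 12}, rounds=15))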
1812.06700
2904236443
With the online proliferation of hate speech, there is an urgent need for systems that can detect such harmful content. In this paper, We present the machine learning models developed for the Automatic Misogyny Identification (AMI) shared task at EVALITA 2018. We generate three types of features: Sentence Embeddings, TF-IDF Vectors, and BOW Vectors to represent each tweet. These features are then concatenated and fed into the machine learning models. Our model came First for the English Subtask A and Fifth for the English Subtask B. We release our winning model for public use and it's available at this https URL.
Research on hate speech is gaining momentum, with several works focusing on different aspects such as analyzing hate speech @cite_12 @cite_17 @cite_0 @cite_15 @cite_3 and detecting hate speech @cite_10 @cite_16 @cite_6 . (A small feature-pipeline sketch follows this record.)
{ "cite_N": [ "@cite_3", "@cite_6", "@cite_0", "@cite_15", "@cite_16", "@cite_10", "@cite_12", "@cite_17" ], "mid": [ "2952038496", "", "2953180101", "", "2951737564", "2887782043", "2796881724", "2903015761" ], "abstract": [ "With the spread of social networks and their unfortunate use for hate speech, automatic detection of the latter has become a pressing problem. In this paper, we reproduce seven state-of-the-art hate speech detection models from prior work, and show that they perform well only when tested on the same type of data they were trained on. Based on these results, we argue that for successful hate speech detection, model architecture is less important than the type of data and labeling criteria. We further show that all proposed detection techniques are brittle against adversaries who can (automatically) insert typos, change word boundaries or add innocuous words to the original hate speech. A combination of these methods is also effective against Google Perspective -- a cutting-edge solution from industry. Our experiments demonstrate that adversarial training does not completely mitigate the attacks, and using character-level features makes the models systematically more attack-resistant than using word-level features.", "", "Social media systems allow Internet users a congenial platform to freely express their thoughts and opinions. Although this property represents incredible and unique communication opportunities, it also brings along important challenges. Online hate speech is an archetypal example of such challenges. Despite its magnitude and scale, there is a significant gap in understanding the nature of hate speech on social media. In this paper, we provide the first of a kind systematic large scale measurement study of the main targets of hate speech in online social media. To do that, we gather traces from two social media systems: Whisper and Twitter. We then develop and validate a methodology to identify hate speech on both these systems. Our results identify online hate speech forms and offer a broader understanding of the phenomenon, providing directions for prevention and detection approaches.", "", "A key challenge for automatic hate-speech detection on social media is the separation of hate speech from other instances of offensive language. Lexical detection methods tend to have low precision because they classify all messages containing particular terms as hate speech and previous work using supervised learning has failed to distinguish between the two categories. We used a crowd-sourced hate speech lexicon to collect tweets containing hate speech keywords. We use crowd-sourcing to label a sample of these tweets into three categories: those containing hate speech, only offensive language, and those with neither. We train a multi-class classifier to distinguish between these different categories. Close analysis of the predictions and the errors shows when we can reliably separate hate speech from other offensive language and when this differentiation is more difficult. We find that racist and homophobic tweets are more likely to be classified as hate speech but that sexist tweets are generally classified as offensive. Tweets without explicit hate keywords are also more difficult to classify.", "The scientific study of hate speech, from a computer science point of view, is recent. 
This survey organizes and describes the current state of the field, providing a structured overview of previous approaches, including core algorithms, methods, and main features used. This work also discusses the complexity of the concept of hate speech, defined in many platforms and contexts, and provides a unifying definition. This area has an unquestionable potential for societ al impact, particularly in online communities and digital media platforms. The development and systematization of shared resources, such as guidelines, annotated datasets in multiple languages, and algorithms, is a crucial step in advancing the automatic detection of hate speech.", "While social media has become an empowering agent to individual voices and freedom of expression, it also facilitates anti-social behaviors including online harassment, cyberbullying, and hate speech. In this paper, we present the first comparative study of hate speech instigators and target users on Twitter. Through a multi-step classification process, we curate a comprehensive hate speech dataset capturing various types of hate. We study the distinctive characteristics of hate instigators and targets in terms of their profile self-presentation, activities, and online visibility. We find that hate instigators target more popular and high profile Twitter users, and that participating in hate speech can result in greater online visibility. We conduct a personality analysis of hate instigators and targets and show that both groups have eccentric personality facets that differ from the general Twitter population. Our results advance the state of the art of understanding online hate speech engagement.", "The present online social media platform is afflicted with several issues, with hate speech being on the predominant forefront. The prevalence of online hate speech has fueled horrific real-world hate-crime such as the mass-genocide of Rohingya Muslims, communal violence in Colombo and the recent massacre in the Pittsburgh synagogue. Consequently, It is imperative to understand the diffusion of such hateful content in an online setting. We conduct the first study that analyses the flow and dynamics of posts generated by hateful and non-hateful users on Gab (gab.com) over a massive dataset of 341K users and 21M posts. Our observations confirms that hateful content diffuse farther, wider and faster and have a greater outreach than those of non-hateful users. A deeper inspection into the profiles and network of hateful and non-hateful users reveals that the former are more influential, popular and cohesive. Thus, our research explores the interesting facets of diffusion dynamics of hateful users and broadens our understanding of hate speech in the online world." ] }
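The feature-concatenation pipeline described in this record's abstract (TF-IDF vectors and BOW vectors combined and fed to a classifier) can be sketched as follows. This is only an illustrative reconstruction under assumptions: the two example texts and labels are invented, sentence embeddings are omitted, and the classifier is a plain logistic regression rather than the authors' released winning model.

from scipy.sparse import hstack
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["you are wonderful", "I hate you, awful person"]   # invented toy data
labels = [0, 1]                                             # 0 = not hateful, 1 = hateful

tfidf = TfidfVectorizer()
bow = CountVectorizer(binary=True)

# Concatenate the TF-IDF and BOW representations into a single sparse feature matrix.
X = hstack([tfidf.fit_transform(texts), bow.fit_transform(texts)])
clf = LogisticRegression().fit(X, labels)

query = ["you are awful"]
print(clf.predict(hstack([tfidf.transform(query), bow.transform(query)])))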
1812.06589
2904622387
Given an arbitrary speech clip and a facial image, talking face generation aims to synthesize a talking face video with precise lip synchronization as well as a smooth transition of facial motion over the entire video speech. Most existing methods mainly focus on either disentangling the information in a single image or learning temporal information between frames. However, speech audio and video often have cross-modality coherence that has not been well addressed during synthesis. Therefore, this paper proposes a novel high-resolution talking face generation model for an arbitrary person by discovering the cross-modality coherence via Mutual Information Approximation (MIA). By assuming the modality difference between audio and video is larger than that of real video and generated video, we estimate mutual information between real audio and video, and then use a discriminator to enforce the generated video distribution to approach the real video distribution. Furthermore, we introduce a dynamic attention technique on the mouth to enhance the robustness during the training stage. Experimental results on the benchmark dataset LRW transcend the state-of-the-art methods on prevalent metrics with robustness on gender, pose variations and high-resolution synthesizing.
Earlier works on talking face generation mainly synthesize a specific identity from the dataset given an arbitrary speech audio. One line of work uses a time-delayed LSTM @cite_2 to generate key points synced to the audio and a second network to generate the video frames conditioned on those key points. A follow-up work further proposes a teeth proxy to improve the quality of the teeth during generation.
{ "cite_N": [ "@cite_2" ], "mid": [ "2952232639" ], "abstract": [ "There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at this http URL ." ] }
1812.06589
2904622387
Given an arbitrary speech clip and a facial image, talking face generation aims to synthesize a talking face video with precise lip synchronization as well as a smooth transition of facial motion over the entire video speech. Most existing methods mainly focus on either disentangling the information in a single image or learning temporal information between frames. However, speech audio and video often have cross-modality coherence that has not been well addressed during synthesis. Therefore, this paper proposes a novel high-resolution talking face generation model for an arbitrary person by discovering the cross-modality coherence via Mutual Information Approximation (MIA). By assuming the modality difference between audio and video is larger than that of real video and generated video, we estimate mutual information between real audio and video, and then use a discriminator to enforce the generated video distribution to approach the real video distribution. Furthermore, we introduce a dynamic attention technique on the mouth to enhance the robustness during the training stage. Experimental results on the benchmark dataset LRW transcend the state-of-the-art methods on prevalent metrics with robustness on gender, pose variations and high-resolution synthesizing.
Subsequent works attempt to adopt an encoder-decoder CNN model to learn the correspondences between raw audio and video data. One work proposes a deep neural network that learns a mapping from input waveforms to the 3D vertex coordinates of a face model; the network simultaneously discovers a latent code to disambiguate facial expression variations. Another introduces a recurrent neural network into the conditional GAN framework @cite_9 to produce a sequence of natural faces in sync with an input audio track. @cite_11 utilize an LSTM network @cite_5 to create lip landmarks from audio input, and @cite_13 employ a temporal GAN to capture temporal information and thereby improve synthesis quality. However, these methods can only synthesize talking faces for identities from the training dataset. Recently, synthesizing talking faces for arbitrary identities outside the dataset has drawn much attention. One approach leverages optical flow to better express the information between frames; another proposes an adversarial learning method to disentangle the different kinds of information in a single image during generation. However, since different identities have large appearance differences, synthesizing talking faces for arbitrary identities remains challenging. (A schematic audio-to-landmark sketch follows this record.)
{ "cite_N": [ "@cite_5", "@cite_9", "@cite_13", "@cite_11" ], "mid": [ "", "2099471712", "2804600264", "1569907127" ], "abstract": [ "", "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.", "Speech-driven facial animation is the process which uses speech signals to automatically synthesize a talking character. The majority of work in this domain creates a mapping from audio features to visual features. This often requires post-processing using computer graphics techniques to produce realistic albeit subject dependent results. We present a system for generating videos of a talking head, using a still image of a person and an audio clip containing speech, that doesn't rely on any handcrafted intermediate features. To the best of our knowledge, this is the first method capable of generating subject independent realistic videos directly from raw audio. Our method can generate videos which have (a) lip movements that are in sync with the audio and (b) natural facial expressions such as blinks and eyebrow movements. We achieve this by using a temporal GAN with 2 discriminators, which are capable of capturing different aspects of the video. The effect of each component in our system is quantified through an ablation study. The generated videos are evaluated based on their sharpness, reconstruction quality, and lip-reading accuracy. Finally, a user study is conducted, confirming that temporal GANs lead to more natural sequences than a static GAN-based approach.", "Long short-term memory (LSTM) is a specific recurrent neural network (RNN) architecture that is designed to model temporal sequences and their long-range dependencies more accurately than conventional RNNs. In this paper, we propose to use deep bidirectional LSTM (BLSTM) for audio visual modeling in our photo-real talking head system. An audio visual database of a subject's talking is firstly recorded as our training data. The audio visual stereo data are converted into two parallel temporal sequences, i.e., contextual label sequences obtained by forced aligning audio against text, and visual feature sequences by applying active-appearance-model (AAM) on the lower face region among all the training image samples. The deep BLSTM is then trained to learn the regression model by minimizing the sum of square error (SSE) of predicting visual sequence from label sequence. After testing different network topologies, we interestingly found the best network is two BLSTM layers sitting on top of one feed-forward layer on our datasets. 
Compared with our previous HMM-based system, the newly proposed deep BLSTM-based one is better on both objective measurement and subjective A B test." ] }
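A minimal sketch of the audio-to-lip-landmark idea mentioned above (an LSTM consuming per-frame audio features and regressing landmark coordinates) is given below. It is schematic only: the feature dimension, hidden size, and number of landmarks are assumptions, and it does not reproduce any of the cited architectures.

import torch
import torch.nn as nn

# Schematic audio-to-lip-landmark regressor: an LSTM over per-frame audio
# features followed by a linear head predicting (x, y) for 20 lip landmarks.
class AudioToLandmarks(nn.Module):
    def __init__(self, audio_dim=26, hidden=128, n_landmarks=20):
        super().__init__()
        self.lstm = nn.LSTM(audio_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_landmarks * 2)

    def forward(self, audio_feats):            # (batch, time, audio_dim)
        out, _ = self.lstm(audio_feats)
        return self.head(out)                  # (batch, time, n_landmarks * 2)

model = AudioToLandmarks()
dummy = torch.randn(2, 100, 26)                # invented batch of MFCC-like frames
print(model(dummy).shape)                      # torch.Size([2, 100, 40])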
1812.06589
2904622387
Given an arbitrary speech clip and a facial image, talking face generation aims to synthesize a talking face video with precise lip synchronization as well as a smooth transition of facial motion over the entire video speech. Most existing methods mainly focus on either disentangling the information in a single image or learning temporal information between frames. However, speech audio and video often have cross-modality coherence that has not been well addressed during synthesis. Therefore, this paper proposes a novel high-resolution talking face generation model for an arbitrary person by discovering the cross-modality coherence via Mutual Information Approximation (MIA). By assuming the modality difference between audio and video is larger than that of real video and generated video, we estimate mutual information between real audio and video, and then use a discriminator to enforce the generated video distribution to approach the real video distribution. Furthermore, we introduce a dynamic attention technique on the mouth to enhance the robustness during the training stage. Experimental results on the benchmark dataset LRW transcend the state-of-the-art methods on prevalent metrics with robustness on gender, pose variations and high-resolution synthesizing.
One of the pioneering approaches calculates relative frequencies on appropriate partitions to approximate mutual information @cite_6 . A popular KNN-based estimator, modified from an earlier entropy estimator, has also been proposed. Recent works employ parameter-free approaches, or rely on approximate Gaussianity of the data distribution, to estimate mutual information. In order to reduce bias while preserving variance, @cite_22 propose to estimate entropy or divergence by ensembling simple plug-in estimators with varying neighborhood sizes. (A minimal partition-based estimator sketch follows this record.)
{ "cite_N": [ "@cite_22", "@cite_6" ], "mid": [ "2141652941", "2951744766" ], "abstract": [ "We develop the general, multivariate case of the Edgeworth approximation of differential entropy and show that it can be more accurate than the nearest-neighbor method in the multivariate case and that it scales better with sample size. Furthermore, we introduce mutual information estimation as an application.", "We derive the mean squared error convergence rates of kernel density-based plug-in estimators of mutual information measures between two multidimensional random variables @math and @math for two cases: 1) @math and @math are both continuous; 2) @math is continuous and @math is discrete. Using the derived rates, we propose an ensemble estimator of these information measures for the second case by taking a weighted sum of the plug-in estimators with varied bandwidths. The resulting ensemble estimator achieves the @math parametric convergence rate when the conditional densities of the continuous variables are sufficiently smooth. To the best of our knowledge, this is the first nonparametric mutual information estimator known to achieve the parametric convergence rate for this case, which frequently arises in applications (e.g. variable selection in classification). The estimator is simple to implement as it uses the solution to an offline convex optimization problem and simple plug-in estimators. A central limit theorem is also derived for the ensemble estimator. Ensemble estimators that achieve the parametric rate are also derived for the first case ( @math and @math are both continuous) and another case 3) @math and @math may have any mixture of discrete and continuous components." ] }
1812.06598
2905543510
Discovering community structure in complex networks is a mature field since a tremendous number of community detection methods have been introduced in the literature. Nevertheless, it is still very challenging for practitioners to determine which method would be suitable to get insights into the structural information of the networks they study. Many recent efforts have been devoted to investigating various quality scores of the community structure, but the problem of distinguishing between different types of communities is still open. In this paper, we propose a comparative, extensive and empirical study to investigate what types of communities many state-of-the-art and well-known community detection methods are producing. Specifically, we provide comprehensive analyses on computation time, community size distribution, a comparative evaluation of methods according to their optimisation schemes as well as a comparison of their partitioning strategy through validation metrics. We process our analyses on a very large corpus of hundreds of networks from five different network categories and propose ways to classify community detection methods, helping a potential user to navigate the complex landscape of community detection.
Agreste et al. evaluate different community detection algorithms in an empirical and comparative approach, especially in the context of web data analytics @cite_38 . The authors find that the label propagation method (LPA) has outstanding scalability and recommend it, which is in global agreement with our analysis in Section providing predictions about the required time of each method as a function of network size. They also conclude that the Infomap ``algorithm showcased the best trade-off between accuracy and computational performance'' based on the @math score. This conclusion could be valid in some specific cases when the expected ground-truth community structure is well understood. Otherwise, additional analyses should be done to determine whether the ground-truth information aligns with the final objective of community detection algorithms. In fact, node metadata are usually used in practice as ground-truth community structure; however, it has been found that metadata communities are sometimes very sparse @cite_46 and, more generally, that metadata should not be treated as ground truth @cite_20 . (A minimal label propagation sketch follows this record.)
{ "cite_N": [ "@cite_38", "@cite_46", "@cite_20" ], "mid": [ "2552367669", "2610304566", "2513567506" ], "abstract": [ "Detecting communities in graphs is a fundamental tool to understand the structure of Web-based systems and predict their evolution. Many community detection algorithms are designed to process undirected graphs (i.e., graphs with bidirectional edges) but many graphs on the Web-e.g., microblogging Web sites, trust networks or the Web graph itself-are often directed . Few community detection algorithms deal with directed graphs but we lack their experimental comparison. In this paper we evaluated some community detection algorithms across accuracy and scalability. A first group of algorithms (Label Propagation and Infomap) are explicitly designed to manage directed graphs while a second group (e.g., WalkTrap) simply ignores edge directionality; finally, a third group of algorithms (e.g., Eigenvector) maps input graphs onto undirected ones and extracts communities from the symmetrized version of the input graph. We ran our tests on both artificial and real graphs and, on artificial graphs, WalkTrap achieved the highest accuracy, closely followed by other algorithms; Label Propagation has outstanding performance in scalability on both artificial and real graphs. The Infomap algorithm showcased the best trade-off between accuracy and computational performance and, therefore, it has to be considered as a promising tool for Web Data Analytics purposes.", "Evaluating a network partition just only via conventional quality metrics - such as modularity, conductance or normalized mutual of information - is usually insufficient. Indeed, global quality scores of a network partition or its clusters do not provide many ideas about their structural characteristics. Furthermore, quality metrics often fail to reach an agreement especially in networks whose modular structures are not very obvious. Evaluating the goodness of network partitions in function of desired structural properties is still a challenge. Here, we propose a methodology that allows one to expose structural information of clusters in a network partition in a comprehensive way, thus eventually helps one to compare communities identified by different community detection methods. This descriptive approach also helps to clarify the composition of communities in real-world networks. The methodology hence bring us a step closer to the understanding of modular structures in complex networks.", "Across many scientific domains, there is a common need to automatically extract a simplified view or coarse-graining of how a complex system’s components interact. This general task is called community detection in networks and is analogous to searching for clusters in independent vector data. It is common to evaluate the performance of community detection algorithms by their ability to find so-called ground truth communities. This works well in synthetic networks with planted communities because these networks’ links are formed explicitly based on those known communities. However, there are no planted communities in real-world networks. Instead, it is standard practice to treat some observed discrete-valued node attributes, or metadata, as ground truth. We show that metadata are not the same as ground truth and that treating them as such induces severe theoretical and practical problems. 
We prove that no algorithm can uniquely solve community detection, and we prove a general No Free Lunch theorem for community detection, which implies that there can be no algorithm that is optimal for all possible community detection tasks. However, community detection remains a powerful tool and node metadata still have value, so a careful exploration of their relationship with network structure can yield insights of genuine worth. We illustrate this point by introducing two statistical techniques that can quantify the relationship between metadata and community structure for a broad class of models. We demonstrate these techniques using both synthetic and real-world networks, and for multiple types of metadata and community structures." ] }
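For reference, the label propagation method (LPA) discussed above can be sketched in a few lines: every node repeatedly adopts the most frequent label among its neighbors. This is a deliberately simplified version (fixed number of rounds, arbitrary tie-breaking) rather than any particular published variant; the karate club graph is used only as a convenient toy network.

import random
from collections import Counter
import networkx as nx

def label_propagation(G, rounds=10, seed=0):
    """Minimal asynchronous label propagation: each node adopts the most
    frequent label among its neighbors until (roughly) stable."""
    rng = random.Random(seed)
    labels = {v: v for v in G.nodes()}         # every node starts in its own community
    for _ in range(rounds):
        nodes = list(G.nodes())
        rng.shuffle(nodes)                     # random update order each round
        for v in nodes:
            neigh = [labels[u] for u in G.neighbors(v)]
            if neigh:
                labels[v] = Counter(neigh).most_common(1)[0][0]
    return labels

G = nx.karate_club_graph()
labels = label_propagation(G)
print(len(set(labels.values())), "communities found")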
1812.06598
2905543510
Discovering community structure in complex networks is a mature field since a tremendous number of community detection methods have been introduced in the literature. Nevertheless, it is still very challenging for practitioners to determine which method would be suitable to get insights into the structural information of the networks they study. Many recent efforts have been devoted to investigating various quality scores of the community structure, but the problem of distinguishing between different types of communities is still open. In this paper, we propose a comparative, extensive and empirical study to investigate what types of communities many state-of-the-art and well-known community detection methods are producing. Specifically, we provide comprehensive analyses on computation time, community size distribution, a comparative evaluation of methods according to their optimisation schemes as well as a comparison of their partitioning strategy through validation metrics. We process our analyses on a very large corpus of hundreds of networks from five different network categories and propose ways to classify community detection methods, helping a potential user to navigate the complex landscape of community detection.
In a recent publication, Ghasemian et al. present an evaluation of overfitting and underfitting of several community detection models @cite_33 . The authors study the number of communities detected in practice by many methods and the maximum number of detectable clusters according to a theoretical model. Some conclusions are drawn about the fitting quality of methods in comparison to theoretical estimates. This study provides evidence that helps to choose an appropriate method as a function of fitting quality. Community detection methods are also grouped into distinct families based on their outputs on many real-world networks (similarly to our analysis in Section ) using the @math metric. The authors also report findings that are aligned with our results obtained through several analyses. (A small partition-comparison sketch follows this record.)
{ "cite_N": [ "@cite_33" ], "mid": [ "2788030877" ], "abstract": [ "A common data mining task on networks is community detection, which seeks an unsupervised decomposition of a network into structural groups based on statistical regularities in the network's connectivity. Although many methods exist, the No Free Lunch theorem for community detection implies that each makes some kind of tradeoff, and no algorithm can be optimal on all inputs. Thus, different algorithms will over or underfit on different inputs, finding more, fewer, or just different communities than is optimal, and evaluation methods that use a metadata partition as a ground truth will produce misleading conclusions about general accuracy. Here, we present a broad evaluation of over and underfitting in community detection, comparing the behavior of 16 state-of-the-art community detection algorithms on a novel and structurally diverse corpus of 406 real-world networks. We find that (i) algorithms vary widely both in the number of communities they find and in their corresponding composition, given the same input, (ii) algorithms can be clustered into distinct high-level groups based on similarities of their outputs on real-world networks, and (iii) these differences induce wide variation in accuracy on link prediction and link description tasks. We introduce a new diagnostic for evaluating overfitting and underfitting in practice, and use it to roughly divide community detection methods into general and specialized learning algorithms. Across methods and inputs, Bayesian techniques based on the stochastic block model and a minimum description length approach to regularization represent the best general learning approach, but can be outperformed under specific circumstances. These results introduce both a theoretically principled approach to evaluate over and underfitting in models of network community structure and a realistic benchmark by which new methods may be evaluated and compared." ] }
1907.02244
2954814982
In this age of social media, people often look at what others are wearing. In particular, Instagram and Twitter influencers often provide images of themselves wearing different outfits and their followers are often inspired to buy similar clothes. We propose a system to automatically find the closest visually similar clothes in the online Catalog (street-to-shop searching). The problem is challenging since the original images are taken under different pose and lighting conditions. The system initially localizes high-level descriptive regions (top, bottom, wristwear, ...) using multiple CNN detectors such as YOLO and SSD that are trained specifically for the apparel domain. It then classifies these regions into more specific categories such as t-shirts, tunics or dresses. Finally, a feature embedding learned using a multi-task function is recovered for every item and then compared with corresponding items in the online Catalog database and ranked according to distance. We validate our approach component-wise using benchmark datasets and end-to-end using human evaluation.
* Localization We review here a number of the main academic approaches for localizing apparel items. In @cite_3 , deep CNNs were trained to predict a set of fashion landmarks, such as shoulder points or neck points. However, landmarks for clothes are sometimes not well defined and are often sensitive to occlusions. Human pose estimation ( @cite_10 ) could be used as a means to infer apparel items’ locations. One drawback is that such methods are not applicable if no person is present in the image. Localization could also be carried out through clothes parsing and (semantic) segmentation such as LIP ( @cite_1 ). While the performance has been promising on standard datasets, it is more computationally expensive and the recall has been unsatisfactory in the initial evaluation on our targeted datasets (e.g. fashion influencer images). For simplicity and scalability, in this work, we localize apparel items using bounding boxes. Top-performing multi-box object detectors such as SSD ( @cite_14 ) and YOLO V3 ( @cite_2 ) with different network bodies and different resolutions are used both for run-time queries and offline index construction. (A hedged sketch of the detect-then-crop step follows this record.)
{ "cite_N": [ "@cite_14", "@cite_1", "@cite_3", "@cite_2", "@cite_10" ], "mid": [ "2193145675", "2598915960", "2471768434", "", "2951856387" ], "abstract": [ "We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For (300 300 ) input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for (512 512 ) input, SSD achieves 76.9 mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https: github.com weiliu89 caffe tree ssd.", "Human parsing has recently attracted a lot of research interests due to its huge application potentials. However existing datasets have limited number of images and annotations, and lack the variety of human appearances and the coverage of challenging cases in unconstrained environment. In this paper, we introduce a new benchmark Look into Person (LIP) that makes a significant advance in terms of scalability, diversity and difficulty, a contribution that we feel is crucial for future developments in human-centric analysis. This comprehensive dataset contains over 50,000 elaborately annotated images with 19 semantic part labels, which are captured from a wider range of viewpoints, occlusions and background complexity. Given these rich annotations we perform detailed analysis of the leading human parsing approaches, gaining insights into the success and failures of these methods. Furthermore, in contrast to the existing efforts on improving the feature discriminative capability, we solve human parsing by exploring a novel self-supervised structure-sensitive learning approach, which imposes human pose structures into parsing results without resorting to extra supervision (i.e., no need for specifically labeling human joints in model training). Our self-supervised learning framework can be injected into any advanced neural networks to help incorporate rich high-level knowledge regarding human joints from a global perspective and improve the parsing results. Extensive evaluations on our LIP and the public PASCAL-Person-Part dataset demonstrate the superiority of our method.", "Recent advances in clothes recognition have been driven by the construction of clothes datasets. Existing datasets are limited in the amount of annotations and are difficult to cope with the various challenges in real-world applications. 
In this work, we introduce DeepFashion1, a large-scale clothes dataset with comprehensive annotations. It contains over 800,000 images, which are richly annotated with massive attributes, clothing landmarks, and correspondence of images taken under different scenarios including store, street snapshot, and consumer. Such rich annotations enable the development of powerful algorithms in clothes recognition and facilitating future researches. To demonstrate the advantages of DeepFashion, we propose a new deep model, namely FashionNet, which learns clothing features by jointly predicting clothing attributes and landmarks. The estimated landmarks are then employed to pool or gate the learned features. It is optimized in an iterative manner. Extensive experiments demonstrate the effectiveness of FashionNet and the usefulness of DeepFashion.", "", "We present an approach to efficiently detect the 2D pose of multiple people in an image. The approach uses a nonparametric representation, which we refer to as Part Affinity Fields (PAFs), to learn to associate body parts with individuals in the image. The architecture encodes global context, allowing a greedy bottom-up parsing step that maintains high accuracy while achieving realtime performance, irrespective of the number of people in the image. The architecture is designed to jointly learn part locations and their association via two branches of the same sequential prediction process. Our method placed first in the inaugural COCO 2016 keypoints challenge, and significantly exceeds the previous state-of-the-art result on the MPII Multi-Person benchmark, both in performance and efficiency." ] }
1907.02244
2954814982
In this age of social media, people often look at what others are wearing. In particular, Instagram and Twitter influencers often provide images of themselves wearing different outfits and their followers are often inspired to buy similar clothes. We propose a system to automatically find the closest visually similar clothes in the online Catalog (street-to-shop searching). The problem is challenging since the original images are taken under different pose and lighting conditions. The system initially localizes high-level descriptive regions (top, bottom, wristwear, ...) using multiple CNN detectors such as YOLO and SSD that are trained specifically for the apparel domain. It then classifies these regions into more specific regions such as t-shirts, tunics or dresses. Finally, a feature embedding learned using a multi-task function is recovered for every item and then compared with corresponding items in the online Catalog database and ranked according to distance. We validate our approach component-wise using benchmark datasets and end-to-end using human evaluation.
* Classification Accurately recognizing the product type of an apparel item is critical for finding similar items in the catalog. In the literature, a number of different classification schemes have been adopted for clothing items, partially as a function of what the labeled input datasets provide ( @cite_3 , @cite_13 ). Most of these papers limit themselves to a small number of classes (fewer than 60, see, e.g., @cite_23 ), with many high-level classes containing highly dissimilar objects (i.e., different product types). In this work, we create a fine-grained classification of 146 classes. This breakdown matches the product-type levels in the online Catalog (for men's and women's clothing), and the resulting classes are typically visually distinctive. Additionally, since we use the Catalog as the search database, matching its hierarchy also greatly facilitates the later indexing and retrieval stages.
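A minimal sketch of what a 146-way fine-grained product-type classifier could look like is given below. The ResNet-50 backbone and ImageNet initialisation are assumptions for illustration only; the paragraph above does not name a specific architecture.

```python
# Sketch: fine-grained product-type classifier applied to detected apparel
# crops. Only the number of output classes (146) comes from the text above;
# the backbone choice is our assumption.
import torch.nn as nn
import torchvision

NUM_PRODUCT_TYPES = 146  # e.g. t-shirt, tunic, dress, ...

def build_classifier() -> nn.Module:
    backbone = torchvision.models.resnet50(weights="DEFAULT")  # ImageNet init
    backbone.fc = nn.Linear(backbone.fc.in_features, NUM_PRODUCT_TYPES)
    return backbone
```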
{ "cite_N": [ "@cite_23", "@cite_13", "@cite_3" ], "mid": [ "2811481004", "2121339428", "2471768434" ], "abstract": [ "Understanding clothes from a single image would have huge commercial and cultural impacts on modern societies. However, this task remains a challenging computer vision problem due to wide variations in the appearance, style, brand and layering of clothing items. We present a new database called ModaNet, a large-scale collection of images based on Paperdoll dataset. Our dataset provides 55,176 street images, fully annotated with polygons on top of the 1 million weakly annotated street images in Paperdoll. ModaNet aims to provide a technical benchmark to fairly evaluate the progress of applying the latest computer vision techniques that rely on large data for fashion understanding. The rich annotation of the dataset allows to measure the performance of state-of-the-art algorithms for object detection, semantic segmentation and polygon prediction on street fashion images in detail.", "Clothing recognition is an extremely challenging problem due to wide variation in clothing item appearance, layering, and style. In this paper, we tackle the clothing parsing problem using a retrieval based approach. For a query image, we find similar styles from a large database of tagged fashion images and use these examples to parse the query. Our approach combines parsing from: pre-trained global clothing models, local clothing models learned on the fly from retrieved examples, and transferred parse masks (paper doll item transfer) from retrieved examples. Experimental evaluation shows that our approach significantly outperforms state of the art in parsing accuracy.", "Recent advances in clothes recognition have been driven by the construction of clothes datasets. Existing datasets are limited in the amount of annotations and are difficult to cope with the various challenges in real-world applications. In this work, we introduce DeepFashion1, a large-scale clothes dataset with comprehensive annotations. It contains over 800,000 images, which are richly annotated with massive attributes, clothing landmarks, and correspondence of images taken under different scenarios including store, street snapshot, and consumer. Such rich annotations enable the development of powerful algorithms in clothes recognition and facilitating future researches. To demonstrate the advantages of DeepFashion, we propose a new deep model, namely FashionNet, which learns clothing features by jointly predicting clothing attributes and landmarks. The estimated landmarks are then employed to pool or gate the learned features. It is optimized in an iterative manner. Extensive experiments demonstrate the effectiveness of FashionNet and the usefulness of DeepFashion." ] }
1907.02244
2954814982
In this age of social media, people often look at what others are wearing. In particular, Instagram and Twitter influencers often provide images of themselves wearing different outfits and their followers are often inspired to buy similar clothes. We propose a system to automatically find the closest visually similar clothes in the online Catalog (street-to-shop searching). The problem is challenging since the original images are taken under different pose and lighting conditions. The system initially localizes high-level descriptive regions (top, bottom, wristwear, ...) using multiple CNN detectors such as YOLO and SSD that are trained specifically for the apparel domain. It then classifies these regions into more specific regions such as t-shirts, tunics or dresses. Finally, a feature embedding learned using a multi-task function is recovered for every item and then compared with corresponding items in the online Catalog database and ranked according to distance. We validate our approach component-wise using benchmark datasets and end-to-end using human evaluation.
Visual similarity search can be done by searching for the nearest neighbors of an embedding extracted from certain intermediate layer(s) of a deep neural network trained for surrogate tasks (see @cite_23 , @cite_21 , @cite_18 and @cite_25 ). The deep network can be trained with a cross-entropy loss (classification), a contrastive loss (pairs), a triplet loss ( @cite_22 , @cite_21 ...) or a quadruplet loss ( @cite_7 ). There seems to be no clear winner among these options (see, e.g., @cite_18 and @cite_6 ); obtaining the highest accuracy depends on careful sampling and tuning strategies for the specific problem (see @cite_25 ). Our approach in this paper, in contrast to parallel efforts in our team, is to directly use the embedding extracted from the fine-grained classification network, thus avoiding the need for a separate CNN to produce an embedding. This also simplifies the engineering effort and reduces run-time latency.
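As a rough sketch of this retrieval scheme, the code below reuses the penultimate layer of the classification network as the embedding and ranks catalog items by Euclidean distance using a nearest-neighbour index. The L2 normalisation and the scikit-learn index are our own illustrative choices, not details from the cited works.

```python
# Sketch: extract embeddings from a torchvision-style classifier (e.g. the
# fine-grained ResNet sketched earlier) by dropping its final layer, then
# rank catalog items by distance in that embedding space.
import numpy as np
import torch
import torch.nn as nn
from sklearn.neighbors import NearestNeighbors

def embed(model: nn.Module, crops: torch.Tensor) -> np.ndarray:
    """crops: preprocessed float tensor of shape (N, 3, H, W)."""
    feature_extractor = nn.Sequential(*list(model.children())[:-1])  # drop the class head
    feature_extractor.eval()
    with torch.no_grad():
        feats = feature_extractor(crops).flatten(1).numpy()
    return feats / np.linalg.norm(feats, axis=1, keepdims=True)  # L2-normalise

def build_index(catalog_feats: np.ndarray) -> NearestNeighbors:
    """Offline step: index the embedded catalog items."""
    return NearestNeighbors(n_neighbors=10, metric="euclidean").fit(catalog_feats)

def retrieve(index: NearestNeighbors, query_feat: np.ndarray):
    """Online step: return (catalog ids, distances) for one query embedding."""
    dists, ids = index.kneighbors(query_feat[None, :])
    return ids[0], dists[0]
```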
{ "cite_N": [ "@cite_18", "@cite_22", "@cite_7", "@cite_21", "@cite_6", "@cite_23", "@cite_25" ], "mid": [ "2144172034", "2591921973", "2606377603", "1975517671", "2598634450", "2811481004", "2547446130" ], "abstract": [ "The key challenge of face recognition is to develop effective feature representations for reducing intra-personal variations while enlarging inter-personal differences. In this paper, we show that it can be well solved with deep learning and using both face identification and verification signals as supervision. The Deep IDentification-verification features (DeepID2) are learned with carefully designed deep convolutional networks. The face identification task increases the inter-personal variations by drawing DeepID2 features extracted from different identities apart, while the face verification task reduces the intra-personal variations by pulling DeepID2 features extracted from the same identity together, both of which are essential to face recognition. The learned DeepID2 features can be well generalized to new identities unseen in the training data. On the challenging LFW dataset [11], 99.15 face verification accuracy is achieved. Compared with the best previous deep learning result [20] on LFW, the error rate has been significantly reduced by 67 .", "In this paper, we present a unified end-to-end approach to build a large scale Visual Search and Recommendation system for e-commerce. Previous works have targeted these problems in isolation. We believe a more effective and elegant solution could be obtained by tackling them together. We propose a unified Deep Convolutional Neural Network architecture, called VisNet, to learn embeddings to capture the notion of visual similarity, across several semantic granularities. We demonstrate the superiority of our approach for the task of image retrieval, by comparing against the state-of-the-art on the Exact Street2Shop dataset. We then share the design decisions and trade-offs made while deploying the model to power Visual Recommendations across a catalog of 50M products, supporting 2K queries a second at Flipkart, India's largest e-commerce company. The deployment of our solution has yielded a significant business impact, as measured by the conversion-rate.", "Person re-identification (ReID) is an important task in wide area video surveillance which focuses on identifying people across different cameras. Recently, deep learning networks with a triplet loss become a common framework for person ReID. However, the triplet loss pays main attentions on obtaining correct orders on the training set. It still suffers from a weaker generalization capability from the training set to the testing set, thus resulting in inferior performance. In this paper, we design a quadruplet loss, which can lead to the model output with a larger inter-class variation and a smaller intra-class variation compared to the triplet loss. As a result, our model has a better generalization ability and can achieve a higher performance on the testing set. In particular, a quadruplet deep network using a margin-based online hard negative mining is proposed based on the quadruplet loss for the person ReID. In extensive experiments, the proposed network outperforms most of the state-of-the-art algorithms on representative datasets which clearly demonstrates the effectiveness of our proposed method.", "Learning fine-grained image similarity is a challenging task. It needs to capture between-class and within-class image differences. 
This paper proposes a deep ranking model that employs deep learning techniques to learn similarity metric directly from images. It has higher learning capability than models based on hand-crafted features. A novel multiscale network structure has been developed to describe the images effectively. An efficient triplet sampling algorithm is also proposed to learn the model with distributed asynchronized stochastic gradient. Extensive experiments show that the proposed algorithm outperforms models based on hand-crafted visual features and deep classification models.", "In the past few years, the field of computer vision has gone through a revolution fueled mainly by the advent of large datasets and the adoption of deep convolutional neural networks for end-to-end learning. The person re-identification subfield is no exception to this. Unfortunately, a prevailing belief in the community seems to be that the triplet loss is inferior to using surrogate losses (classification, verification) followed by a separate metric learning step. We show that, for models trained from scratch as well as pretrained ones, using a variant of the triplet loss to perform end-to-end deep metric learning outperforms most other published methods by a large margin.", "Understanding clothes from a single image would have huge commercial and cultural impacts on modern societies. However, this task remains a challenging computer vision problem due to wide variations in the appearance, style, brand and layering of clothing items. We present a new database called ModaNet, a large-scale collection of images based on Paperdoll dataset. Our dataset provides 55,176 street images, fully annotated with polygons on top of the 1 million weakly annotated street images in Paperdoll. ModaNet aims to provide a technical benchmark to fairly evaluate the progress of applying the latest computer vision techniques that rely on large data for fashion understanding. The rich annotation of the dataset allows to measure the performance of state-of-the-art algorithms for object detection, semantic segmentation and polygon prediction on street fashion images in detail.", "We suggest a new loss for learning deep embeddings. The key characteristics of the new loss is the absence of tunable parameters and very good results obtained across a range of datasets and problems. The loss is computed by estimating two distribution of similarities for positive (matching) and negative (non-matching) point pairs, and then computing the probability of a positive pair to have a lower similarity score than a negative pair based on these probability estimates. We show that these operations can be performed in a simple and piecewise-differentiable manner using 1D histograms with soft assignment operations. This makes the proposed loss suitable for learning deep embeddings using stochastic optimization. The experiments reveal favourable results compared to recently proposed loss functions." ] }
1907.02326
2954114218
We propose an interactive-predictive neural machine translation framework for easier model personalization using reinforcement and imitation learning. During the interactive translation process, the user is asked for feedback on uncertain locations identified by the system. Responses are weak feedback in the form of "keep" and "delete" edits, and expert demonstrations in the form of "substitute" edits. Conditioning on the collected feedback, the system creates alternative translations via constrained beam search. In simulation experiments on two language pairs our systems get close to the performance of supervised training with much less human effort.
Interactive-predictive translation goes back to early approaches for IBM-type @cite_6 @cite_23 and phrase-based machine translation @cite_7 @cite_26 . Knowles and Koehn presented neural interactive translation prediction, a translation scenario where translators interact with an NMT system by accepting or correcting the subsequent target tokens suggested by the system in an auto-complete style. However, in their work the system parameters are not updated based on the prefix. This idea is implemented in , , , , or . In contrast to our work, these approaches use complete post-edited sentences to update their systems, while we update our model based on partial translations. Furthermore, our approach employs techniques to reduce the number of interactions. Our work is also closely related to approaches for interactive pre-post-editing @cite_21 @cite_22 . Their core idea is to ask the translator to mark good segments and use these for a more informed re-decoding, whereas we integrate constraints derived from diverse human feedback to interactively improve decoding. Additionally, we try to reduce human effort by minimizing the number of feedback requests and by frequent model updates.
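The auto-complete style of interaction can be sketched as a simple loop that extends a user-validated prefix token by token and re-decodes after each correction. The `next_token_distribution` interface below is hypothetical and stands in for an arbitrary NMT decoder; it is not the API of any cited system, and greedy completion is used here instead of the constrained beam search discussed above.

```python
# Sketch of the auto-complete interaction: given a validated prefix, greedily
# extend the hypothesis; after the user corrects a token, the corrected
# prefix is fed back in and the suffix is re-decoded.
def complete(prefix_tokens, next_token_distribution, eos="</s>", max_len=100):
    """next_token_distribution(tokens) -> dict mapping token -> probability (hypothetical)."""
    hypothesis = list(prefix_tokens)
    while len(hypothesis) < max_len:
        probs = next_token_distribution(hypothesis)
        token = max(probs, key=probs.get)  # greedy choice of the next target token
        hypothesis.append(token)
        if token == eos:
            break
    return hypothesis
```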
{ "cite_N": [ "@cite_26", "@cite_22", "@cite_7", "@cite_21", "@cite_6", "@cite_23" ], "mid": [ "2251171258", "2251044602", "2100271871", "", "1514971736", "2164788644" ], "abstract": [ "Analyses of computer aided translation typically focus on either frontend interfaces and human effort, or backend translation and machine learnability of corrections. However, this distinction is artificial in practice since the frontend and backend must work in concert. We present the first holistic, quantitative evaluation of these issues by contrasting two assistive modes: postediting and interactive machine translation (MT). We describe a new translator interface, extensive modifications to a phrasebased MT system, and a novel objective function for re-tuning to human corrections. Evaluation with professional bilingual translators shows that post-edit is faster than interactive at the cost of translation quality for French-English and EnglishGerman. However, re-tuning the MT system to interactive output leads to larger, statistically significant reductions in HTER versus re-tuning to post-edit. Analysis shows that tuning directly to HTER results in fine-grained corrections to subsequent machine output.", "We introduce pre-post-editing, possibly the most basic form of interactive translation, as a touch-based interaction with iteratively improved translation hypotheses prior to classical post-editing. We report simulated experiments that yield very large improvements on classical evaluation metrics (up to 21 BLEU) as well as on a parameterized variant of the TER metric that takes into account the cost of matching touching tokens, confirming the promising prospects of the novel translation scenarios offered by our approach.", "Current machine translation (MT) systems are still not perfect. In practice, the output from these systems needs to be edited to correct errors. A way of increasing the productivity of the whole translation process (MT plus human work) is to incorporate the human correction activities within the translation process itself, thereby shifting the MT paradigm to that of computer-assisted translation. This model entails an iterative process in which the human translator activity is included in the loop: In each iteration, a prefix of the translation is validated (accepted or amended) by the human and the system computes its best (or n-best) translation suffix hypothesis to complete this prefix. A successful framework for MT is the so-called statistical (or pattern recognition) framework. Interestingly, within this framework, the adaptation of MT systems to the interactive scenario affects mainly the search process, allowing a great reuse of successful techniques and models. In this article, alignment templates, phrase-based models, and stochastic finite-state transducers are used to develop computer-assisted translation systems. These systems were assessed in a European project (TransType2) in two real tasks: The translation of printer manuals; manuals and the translation of the Bulletin of the European Union. In each task, the following three pairs of languages were involved (in both translation directions): English-Spanish, English-German, and English-French.", "", "The use of Machine Translation as a tool for professional or other highly skilled translators is for the most part currently limited to postediting arrangements in which the translator invokes MT when desired and then manually cleans up the results. 
A theoretically promising but hitherto largely unsuccessful alternative to postediting for this application is interactive machine translation (IMT), in which the translator and MT system work in tandem. We argue that past failures to make IMT viable as a tool for skilled translators have been the result of an infelicitous mode of interaction rather than any inherent flaw in the idea. As a solution, we propose a new style of IMT in which the target text under construction serves as the medium of communication between an MT system and its user. We describe the design, implementation, and performance of an automatic word completion system for translators which is intended to demonstrate the feasibility of the proposed approach, albeit in a very rudimentary form.", "Text prediction is a form of interactive machine translation that is well suited to skilled translators. In principle it can assist in the production of a target text with minimal disruption to a translator's normal routine. However, recent evaluations of a prototype prediction system showed that it significantly decreased the productivity of most translators who used it. In this paper, we analyze the reasons for this and propose a solution which consists in seeking predictions that maximize the expected benefit to the translator, rather than just trying to anticipate some amount of upcoming text. Using a model of a \"typical translator\" constructed from data collected in the evaluations of the prediction prototype, we show that this approach has the potential to turn text prediction into a help rather than a hindrance to a translator." ] }
1907.02326
2954114218
We propose an interactive-predictive neural machine translation framework for easier model personalization using reinforcement and imitation learning. During the interactive translation process, the user is asked for feedback on uncertain locations identified by the system. Responses are weak feedback in the form of "keep" and "delete" edits, and expert demonstrations in the form of "substitute" edits. Conditioning on the collected feedback, the system creates alternative translations via constrained beam search. In simulation experiments on two language pairs our systems get close to the performance of supervised training with much less human effort.
González- apply active learning to interactive machine translation, where a user interactively finishes the translations of a statistical MT system. Their active learning component decides which sentences to sample for translation and to receive supervision for, and the MT system is updated online @cite_16 . In our algorithm, the active learning component instead decides which prefixes to receive feedback for, based on the entropy of the policy distribution.
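A minimal sketch of such an entropy-based trigger is shown below: feedback is requested only when the entropy of the current token distribution exceeds a threshold. The threshold value is an arbitrary placeholder, not a figure from the paper.

```python
# Sketch: request user feedback at uncertain decoding positions, where
# uncertainty is measured by the entropy of the policy's token distribution.
import numpy as np

def should_request_feedback(token_probs, threshold=2.0):
    """token_probs: 1-D array of probabilities over the target vocabulary."""
    p = np.asarray(token_probs, dtype=np.float64)
    p = p / p.sum()                              # guard against rounding drift
    entropy = -np.sum(p * np.log(p + 1e-12))     # entropy in nats
    return entropy > threshold                   # high entropy -> ask the user
```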
{ "cite_N": [ "@cite_16" ], "mid": [ "1716250762" ], "abstract": [ "State-of-the-art Machine Translation (MT) systems are still far from being perfect. An alternative is the so-called Interactive Machine Translation (IMT) framework. In this framework, the knowledge of a human translator is combined with a MT system. The vast majority of the existing work on IMT makes use of the well-known batch learning paradigm. In the batch learning paradigm, the training of the IMT system and the interactive translation process are carried out in separate stages. This paradigm is not able to take advantage of the new knowledge produced by the user of the IMT system. In this paper, we present an application of the online learning paradigm to the IMT framework. In the online learning paradigm, the training and prediction stages are no longer separated. This feature is particularly useful in IMT since it allows the user feedback to be taken into account. The online learning techniques proposed here incrementally update the statistical models involved in the translation process. Empirical results show the great potential of online learning in the IMT framework." ] }
1907.02361
2954969685
The 5G New Radio (NR) standard for wireless communications supports the millimetre-wave (mmWave) spectrum to yield unprecedented improvement of the access network capacity. However, intermittent blockages in the mmWave signal may degrade the system performance and lead to the under-utilisation of the allocated resources. To circumvent this problem, the transmission slot-time shall be adjusted according to the blockage condition, avoiding the resource under-utilisation. In this paper, we propose that the 5G NR flexible numerology should be applied to adapt the slot-time in order to mitigate the blockage effects. We validate this claim by analysing the expected data rate of a mmWave system, under a range of blockage scenarios. We show that different blockage scenarios may require different numerologies to produce best performance, and that the correct choice of numerology may improve this performance by as much as hundreds of Mbps. Our results carry insights important for the design of blockage-aware scheduling mechanisms for 5G.
The literature on blockage mitigation in mmWave communication is mostly focused on techniques that rely on spatial macro-diversity. Such techniques allow the transmitter to find an alternative physical path for the mmWave signal when the primary LOS path fails due to a blockage event. The main techniques considered are: (i) using surfaces made of materials that reflect the mmWave signal to cover an obstructed spot through an NLOS path @cite_17 @cite_28 ; (ii) forwarding the transmission to a relay node that has a LOS path to the UE @cite_19 @cite_18 ; (iii) moving the AP during the transmission to a position where there is a LOS path @cite_24 ; and (iv) associating the UE with multiple APs, so that the UE can have a LOS path served by a backup AP @cite_9 @cite_2 .
{ "cite_N": [ "@cite_18", "@cite_28", "@cite_9", "@cite_24", "@cite_19", "@cite_2", "@cite_17" ], "mid": [ "2905177534", "2786009799", "2962725695", "2771699997", "2612195480", "2793438365", "2797160996" ], "abstract": [ "60 GHz millimeter-wave networks have emerged as a potential candidate for designing the next generation of multi-gigabit WLANs. Since the 60 GHz links suffer from frequent outages due to blockages caused by human mobility, deploying 60 GHz WLANs that can provide robust coverage in presence of blockages is a challenging problem. In this paper, we study blockage-aware coverage and deployment of 60 GHz WLANs. We first show that the reflection profile of an indoor environment can be sensed using a few measurements. A novel coverage metric (angular spread coverage) which captures the number of available paths and their spatial diversity is proposed. Additionally, it is shown that using relays can extend the coverage of the AP at a lower cost and provide added spatial diversity in the available paths. We propose a heuristic algorithm that determines the AP and relay locations while maximizing the angular spread coverage metric for the clients. Our testbed-based evaluation shows that for five different rooms, our proposed deployment can guarantee an average connectivity of 91.7 , 83.9 , and 74.1 of client locations in the presence of 1, 3 and 5 concurrent human blockages respectively, substantially increasing the robustness of 60 GHz links against blockages.", "Device to device (D2D) relaying and multi-beam reflection are two effective approaches to deal with the blockage problem in millimeter-wave (mmWave) communication, each with its own limitations when serving a large number of user equipments (UE). A combination of D2D relaying and multibeam reflection is expected to enhance the performance, but the selection of UEs to be served by each approache remains a challenge. In this paper, we consider adaptive mode selection between D2D relaying and multi-beam reflection in a time division duplex (TDD) mmWave network. We formulate a joint mode selection and resource sharing problem with the objective of maximizing the sum logarithm rate, and propose a two-stage solution algorithm. In the first stage, we derive the optimal resource sharing solution under the case that all UEs are served by D2D relaying. In the second stage, an adaptive algorithm is proposed to determine the set of UEs that switch from D2D relaying to multi-beam reflection. Simulation results demonstrate that the proposed scheme achieves considerable performance gain compared to several benchmark schemes.", "Multi-connectivity is emerging as promising solution to provide reliable communications and seamless connectivity at the millimeter-wave frequency range. Due to the obstacles that cause frequent interruptions at such high frequency range, connectivity to multiple cells can drastically increase the network performance in terms of throughput and reliability by coordination among the network elements. In this paper, we propose an algorithm for the link scheduling optimization that maximizes the network throughput for multi-connectivity in millimeter-wave cellular networks. The considered approach exploits a centralized architecture, fast link switching, proactive context preparation and data forwarding between millimeter-wave access points and the users. 
The proposed algorithm is able to numerically approach the global optimum and to quantify the potential gain of multi-connectivity in millimeter-wave cellular networks.", "Unmanned aerial vehicles (UAVs) are widely used in military due to the hostile environment and safety concern. To tackle the challenges on UAV communications, millimeter wave (MMwave) communication has been considered as a potential solution for both security and high throughput capability. However, insufficient research has been carried out for MMwave communications on drone, and no widely recognized channel model is available. In this paper, the blockage effect of MMwave communications, due to the drone propeller, is studied using hardware testbed. A scheme is proposed to predict the fast but periodic blockage and mitigates the throughput loss in such a ‘heartbeat’ channel.", "Millimeter-wave (mmWave) communications is one of the most promising candidate technologies for next generation cellular networks due to the global bandwidth shortage for mobile broadband access. The susceptibility of millimeter waveform propagation to blockages, however, may largely restrict the coverage of mmWave signals. To overcome blockages, we propose to leverage two-hop device-to-device (D2D) relaying. Using stochastic geometry, we develop a coverage probability model for the downlink of a relay- assisted mmWave cellular network using dominant interferer analysis, which accounts for both beamforming gains and blockages. Theoretical analysis and simulation results show that the downlink coverage of a mmWave cellular network can be improved by using two-hop D2D relay transmissions.", "Network softwarization is a major paradigm shift, which enables programmable and flexible system operation in challenging use cases. In the fifth-generation (5G) mobile networks, the more advanced scenarios envision transfer of high-rate mission-critical traffic. Achieving end-to-end reliability of these stringent sessions requires support from multiple radio access technologies and calls for dynamic orchestration of resources across both radio access and core network segments. Emerging 5G systems can already offer network slicing, multi-connectivity, and end-to-end quality provisioning mechanisms for critical data transfers within a single software-controlled network. Whereas these individual enablers are already in active development, a holistic perspective on how to construct a unified, service-ready system as well as understand the implications of critical traffic on serving other user sessions is not yet available. Against this background, this paper first introduces a softwarized 5G architecture for end-to-end reliability of the mission-critical traffic. Then, a mathematical framework is contributed to model the process of critical session transfers in a softwarized 5G access network, and the corresponding impact on other user sessions is quantified. Finally, a prototype hardware implementation is completed to investigate the practical effects of supporting mission-critical data in a softwarized 5G core network, as well as substantiate the key system design choices.", "Milimeter-wave links can provide GBit s data rates but are highly susceptible to blockage. In case a direct line-of-sight communication path becomes blocked, communication via a reflected path may allow to maintain connectivity. A common approach is to switch to such an alternative path whenever the first path becomes blocked. 
However, this requires detecting the blockage and then reconfiguring the transceiver to use the new path which incurs latency. For traffic with strict latency or reliability requirements, or in highly dynamic environments where path switching would be frequent, using both paths concurrently can be more beneficial. In this paper, we consider using multiple paths and dividing the transmission power over those paths, instead of path switching. We propose an algorithm to allocate power among the different mmWave communication paths to overcome link blockage under randomly distributed obstacles. The power allocation algorithm is based on analysis of the blockage probabilities of the direct and reflected paths using geometric probability, to statistically maximize the overall capacity of the path between two nodes. We evaluate the performance of the proposed algorithm via simulation for various wireless environments." ] }
1907.02361
2954969685
The 5G New Radio (NR) standard for wireless communications supports the millimetre-wave (mmWave) spectrum to yield unprecedented improvement of the access network capacity. However, intermittent blockages in the mmWave signal may degrade the system performance and lead to the under-utilisation of the allocated resources. To circumvent this problem, the transmission slot-time shall be adjusted according to the blockage condition, avoiding the resource under-utilisation. In this paper, we propose that the 5G NR flexible numerology should be applied to adapt the slot-time in order to mitigate the blockage effects. We validate this claim by analysing the expected data rate of a mmWave system, under a range of blockage scenarios. We show that different blockage scenarios may require different numerologies to produce best performance, and that the correct choice of numerology may improve this performance by as much as hundreds of Mbps. Our results carry insights important for the design of blockage-aware scheduling mechanisms for 5G.
It is the responsibility of the MAC layer to coordinate the extra communication nodes (e.g., relay nodes, neighbouring APs) and to provide a smooth handover between APs, relays, or reflectors when the mmWave signal power fades due to blockage @cite_20 @cite_27 @cite_8 @cite_10 . However, intermittent blockages together with a fixed TTI may lead to poor utilisation of the transmission resources. Therefore, to avoid this under-utilisation, we propose applying flexible numerology to mitigate blockage effects through MAC-layer transmission-time adaptation.
{ "cite_N": [ "@cite_27", "@cite_10", "@cite_20", "@cite_8" ], "mid": [ "", "2575875755", "2571468893", "2724926863" ], "abstract": [ "", "The millimeter wave (mmWave) bands offer the possibility of orders of magnitude greater throughput for fifth-generation (5G) cellular systems. However, since mmWave signals are highly susceptible to blockage, channel quality on any one mmWave link can be extremely intermittent. This paper implements a novel dual connectivity protocol that enables mobile user equipment devices to maintain physical layer connections to 4G and 5G cells simultaneously. A novel uplink control signaling system combined with a local coordinator enables rapid path switching in the event of failures on any one link. This paper provides the first comprehensive end-to-end evaluation of handover mechanisms in mmWave cellular systems. The simulation framework includes detailed measurement-based channel models to realistically capture spatial dynamics of blocking events, as well as the full details of Medium Access Control, Radio Link Control, and transport protocols. Compared with conventional handover mechanisms, this paper reveals significant benefits of the proposed method under several metrics.", "Capacity and ultra-reliable communication are some of the requirements for 5th generation (5G) networks. One of the candidate technologies to satisfy capacity requirement is standalone Ultra Dense Network (UDN). However, UDNs are characterized by fast change of received signal strength that creates mobility challenges in terms of increased handovers and connection failures. In this paper, a low layer multiconnectivity scheme is presented for standalone UDN aiming at ultra-reliable communication that is free of interruptions from handover procedures and connection failures. Furthermore, the problem in managing of the set of serving cells, that are involved in multiconnectivity for each user, is formulated. By using numerical method, feasible scheme for management of the set of serving cells is derived. Performance of the proposed multiconnectivity scheme is evaluated and compared against single connectivity. It is shown that the proposed multiconnectivity scheme outperforms single connectivity considerably in terms of connection failures and cell-edge throughput.", "Leveraging multiple simultaneous small cell connections is an emerging and promising solution to enhance session continuity in millimeter-wave (mmWave) cellular systems that suffer from frequent link interruptions due to blockage in ultra-dense urban deployments. However, the available performance benefits of feasible multi-connectivity strategies as well as the tentative service quality gains that they promise remain an open research question. Addressing it requires the development of a novel performance evaluation methodology, which should consider: 1) the intricacies of mmWave radio propagation in realistic urban environments; 2) the dynamic mmWave link blockage due to human mobility; and 3) the multi-connectivity network behavior to preserve session continuity. In this paper, we construct this much needed methodology by combining the methods from queuing theory, stochastic geometry, as well as ray-based and system-level simulations. With this integrated framework, both user- and network-centric performance indicators together with their underlying scaling laws can be quantified in representative mmWave scenarios. 
To ensure modeling accuracy, the components of our methodology are carefully cross verified and calibrated against the current considerations in the standards. Building on this, a thorough comparison of alternative multi-connectivity strategies is conducted, as this paper reveals that even simpler multi-connectivity schemes bring notable improvements to session-level mmWave operation in realistic environments. These findings may become an important reference point for subsequent standardization in this area." ] }
1907.02361
2954969685
The 5G New Radio (NR) standard for wireless communications supports the millimetre-wave (mmWave) spectrum to yield unprecedented improvement of the access network capacity. However, intermittent blockages in the mmWave signal may degrade the system performance and lead to the under-utilisation of the allocated resources. To circumvent this problem, the transmission slot-time shall be adjusted according to the blockage condition, avoiding the resource under-utilisation. In this paper, we propose that the 5G NR flexible numerology should be applied to adapt the slot-time in order to mitigate the blockage effects. We validate this claim by analysing the expected data rate of a mmWave system, under a range of blockage scenarios. We show that different blockage scenarios may require different numerologies to produce best performance, and that the correct choice of numerology may improve this performance by as much as hundreds of Mbps. Our results carry insights important for the design of blockage-aware scheduling mechanisms for 5G.
In the state of the art, flexible numerology has been applied to improve network latency, with the TTI optimised according to a latency deadline restriction @cite_21 or to the traffic pattern @cite_16 . It has also been applied to improve the frame spectral efficiency when multiplexing different types of services, e.g., eMBB and URLLC @cite_6 @cite_30 .
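For reference, in 5G NR the numerology index mu sets the subcarrier spacing to 15 * 2^mu kHz and the slot duration to 1/2^mu ms (14 OFDM symbols per slot with a normal cyclic prefix). The toy calculation below only illustrates why a shorter slot can waste less time when a blockage cuts the usable transmission window; it is our own illustration, not the expected-rate model analysed in the paper, and the 1.3 ms unblocked interval is an arbitrary example.

```python
# Toy illustration: finer numerologies (shorter slots) leave less of an
# unblocked interval unused when blockage truncates the transmission window.
def slot_duration_ms(mu: int) -> float:
    return 1.0 / (2 ** mu)          # 5G NR: slot = 1 / 2^mu ms

def subcarrier_spacing_khz(mu: int) -> float:
    return 15.0 * (2 ** mu)         # 5G NR: SCS = 15 * 2^mu kHz

def useful_slot_fraction(mu: int, unblocked_ms: float) -> float:
    """Fraction of the unblocked interval covered by whole slots."""
    slot = slot_duration_ms(mu)
    full_slots = int(unblocked_ms // slot)
    return (full_slots * slot) / unblocked_ms if unblocked_ms > 0 else 0.0

for mu in range(4):
    print(mu, subcarrier_spacing_khz(mu), slot_duration_ms(mu),
          round(useful_slot_fraction(mu, unblocked_ms=1.3), 3))
```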
{ "cite_N": [ "@cite_30", "@cite_21", "@cite_6", "@cite_16" ], "mid": [ "2963986572", "2587066115", "2907793396", "2899057481" ], "abstract": [ "We explore the potential of optimizing resource allocation with flexible numerology in frequency domain and variable frame structure in time domain, with services of with different types of requirements. We prove the NP-hardness of the problem and propose a scalable optimization algorithm based on linear programming and Lagrangian duality. Numerical results show significant advantages of adopting flexibility in both time and frequency domains for capacity enhancement and meeting the requirements of mission critical services.", "The advent of 5G will enable further diverse types of services to be supported, some of which have different requirements from the radio access architecture such as extreme low latency, ultra-high reliability and massive connectivity. With the scarcity of the available bandwidth, careful design of radio frame structures is essential in order to efficiently support the challenging requirements of 5G. The flexible numerology design of the PHY layer frame structure is a key tool in achieving low latency and high reliability. In this paper, we present a methodology to compute the optimal numerology for a given set of requirements from the PHY layer. It is shown that during low mobility and high SNR regimes, the choice of the numerology becomes more critical than at high mobility and low SNR regimes.", "The 5G New Radio (NR) access technology defines multiple numerologies to support a wide range of carrier frequencies, deployment scenarios, and variety of use cases. In this paper, we consider a resource allocation problem to efficiently support multiple numerologies simultaneously. We assume frequency division multiplexing (FDM) of numerologies in a time division duplex (TDD) system with a self-contained slot format. We focus on optimizing the numerology subband (SB) configuration, as well as the duplexing ratio between downlink (DL) and uplink (UL) directions within each SB. The optimization problem minimizes the weighted sum of the normalized load (NL) for each SB in each direction. We prove that our optimization problem is convex and, furthermore, we derive the optimal closed-form expressions for the numerology SB configuration and the DL-UL duplexing ratio per SB. The effectiveness of the proposed resource allocation is validated through an end-to-end ns-3 based simulator, which shows how the optimization of the NLs is translated into an improved throughput and delay performance.", "In this paper, we use a New Radio (NR) simulator, based on ns-3, to assess the impact of 5G NR numerologies on the end-to-end (E2E) latencies in a realistic and complex scenario, including TCP and UDP flows. As expected, we found that TCP goodput increases with the numerology, since a larger numerology allows reducing the round-trip-time. However, although counter-intuitive, simulation results exhibit that the E2E latency of uplink (UL) UDP flows may not be reduced with the numerology. In fact, it depends on two key factors and their relationship: the processing delays (fixed or numerologydependent) and the inter-packet arrival time, which depends on the UDP flow rate and the packet size. We demonstrate how, in some cases, the latency is worsened by an increasing signaling exchange that grows with the numerology. 
In particular, this is due to a handshake mechanism in UL (scheduling request and UL grant) that is performed each time a data packet encounters empty RLC buffers. For some combination of flow rate, packet size, and processing delays that are not numerology dependent, increasing the numerology may not reduce the E2E delay. Therefore, we conclude that the selection of the numerology in an NR system should be carefully made by taking into account the traffic patterns and the processing delays." ] }
1907.02288
2956093676
Is it possible to predict the affect of a user just by observing her behavioral interaction through a video? How can we, for instance, predict a user's arousal in games by merely looking at the screen during play? In this paper we address these questions by employing three dissimilar deep convolutional neural network architectures in our attempt to learn the underlying mapping between video streams of gameplay and the player's arousal. We test the algorithms in an annotated dataset of 50 gameplay videos of a survival shooter game and evaluate the deep learned models' capacity to classify high vs low arousal levels. Our key findings with the demanding leave-one-video-out validation method reveal accuracies of over 78% on average and 98% at best. While this study focuses on games and player experience as a test domain, the findings and methodology are directly relevant to any affective computing area, introducing a general and user-agnostic approach for modeling affect.
Videos have been at the core of interest for both eliciting and modeling emotions in affective computing @cite_29 . Typically, the video features a human face (or a group of faces), and emotion is modelled through the detection of facial cues (see @cite_43 @cite_0 @cite_25 among many others), motivated by theoretical frameworks and evidence that facial expressions can convey emotion @cite_34 @cite_24 @cite_28 . Beyond the facial expression of a subject, aspects such as body posture @cite_4 @cite_26 , gestures @cite_42 or gait @cite_27 @cite_18 have been used as input for modeling affect.
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_4", "@cite_28", "@cite_29", "@cite_42", "@cite_0", "@cite_43", "@cite_24", "@cite_27", "@cite_34", "@cite_25" ], "mid": [ "2519256581", "2136119880", "1869445247", "1966797434", "", "2142385019", "2159668072", "1968600824", "1964422354", "2069002413", "2108385997", "2962858109" ], "abstract": [ "Automatic emotion recognition is of great value in many applications, however, to fully display the application value of emotion recognition, more portable, non-intrusive, inexpensive technologies need to be developed. Human gaits could reflect the walker’s emotional state, and could be an information source for emotion recognition. This paper proposed a novel method to recognize emotional state through human gaits by using Microsoft Kinect, a low-cost, portable, camera-based sensor. Fifty-nine participants’ gaits under neutral state, induced anger and induced happiness were recorded by two Kinect cameras, and the original data were processed through joint selection, coordinate system transformation, sliding window gauss filtering, differential operation, and data segmentation. Features of gait patterns were extracted from 3-dimentional coordinates of 14 main body joints by Fourier transformation and Principal Component Analysis (PCA). The classifiers NaiveBayes, RandomForests, LibSVM and SMO (Sequential Minimal Optimization) were trained and evaluated, and the accuracy of recognizing anger and happiness from neutral state achieved 80.5 and 75.4 . Although the results of distinguishing angry and happiness states were not ideal in current study, it showed the feasibility of automatically recognizing emotional states from gaits, with the characteristics meeting the application requirements.", "The conveyance and recognition of affect and emotion partially determine how people interact with others and how they carry out and perform in their day-to-day activities. Hence, it is becoming necessary to endow technology with the ability to recognize users' affective states to increase the technologies' effectiveness. This paper makes three contributions to this research area. First, we demonstrate recognition models that automatically recognize affective states and affective dimensions from non-acted body postures instead of acted postures. The scenario selected for the training and testing of the automatic recognition models is a body-movement-based video game. Second, when attributing affective labels and dimension levels to the postures represented as faceless avatars, the level of agreement for observers was above chance level. Finally, with the use of the labels and affective dimension levels assigned by the observers as ground truth and the observers' level of agreement as base rate, automatic recognition models grounded on low-level posture descriptions were built and tested for their ability to generalize to new observers and postures using random repeated subsampling validation. The automatic recognition models achieve recognition percentages comparable to the human base rates as hypothesized.", "The recognition of affective human communication may be used to provide developers with a rich source of information for creating systems that are capable of interacting well with humans. Posture has been acknowledged as an important modality of affective communication in many fields. Behavioral studies have shown that posture can communicate discrete emotion categories as well as affective dimensions. 
In the affective computing field, while models for the automatic recognition of discrete emotion categories from posture have been proposed, to our knowledge, there are no models for the automatic recognition of affective dimensions from static posture. As a continuation of our previous study, the two main goals of this study are: i) to build automatic recognition models to discriminate between levels of affective dimensions based on low-level postural features; and ii) to investigate both the discriminative power and the limitations of the postural features proposed. The models were built on the basis of human observers' ratings of posture according to affective dimensions directly (instead of emotion category) in conjunction with our posture features.", "Abstract Emotions are viewed as having evolved through their adaptive value in dealing with fundamental life-tasks. Each emotion has unique features: signal, physiology, and antecedent events. Each emotion also has characteristics in common with other emotions: rapid onset, short duration, unbidden occurrence, automatic appraisal, and coherence among responses. These shared and unique characteristics are the product of our evolution, and distinguish emotions from other affective phenomena.", "", "This paper illustrates our recent work on the analysis of expressive gesture related to the motion of the upper body (the head and the hands) in the context of emotional portrayals performed by professional actors. An experiment is presented which is the result of a multidisciplinary joint work. The experiment aims at (i) developing models and algorithms for analysis of such expressive content (ii) individuating which motion cues are involved in conveying the actorpsilas expressive intentions to portray four emotions (anger, joy, relief, sadness) via a scenario approach. The paper discusses the experiment in detail with reference to related conceptual issues, developed techniques, and the obtained results.", "Spontaneous facial expressions differ from posed expressions in both which muscles are moved, and in the dynamics of the movement. Advances in the field of automatic facial expression measurement will require development and assessment on spontaneous behavior. Here we present preliminary results on a task of facial action detection in spontaneous facial expressions. We employ a user independent fully automatic system for real time recognition of facial actions from the Facial Action Coding System (FACS). The system automatically detects frontal faces in the video stream and coded each frame with respect to 20 Action units. The approach applies machine learning methods such as support vector machines and AdaBoost, to texture-based image representations. The output margin for the learned classifiers predicts action unit intensity. Frame-by-frame intensity measurements will enable investigations into facial expression dynamics which were previously intractable by human coding.", "We present the Computer Expression Recognition Toolbox (CERT), a software tool for fully automatic real-time facial expression recognition, and officially release it for free academic use. CERT can automatically code the intensity of 19 different facial actions from the Facial Action Unit Coding System (FACS) and 6 different protoypical facial expressions. It also estimates the locations of 10 facial features as well as the 3-D orientation (yaw, pitch, roll) of the head. 
On a database of posed facial expressions, Extended Cohn-Kanade (CK+ [1]), CERT achieves an average recognition performance (probability of correctness on a two-alternative forced choice (2AFC) task between one positive and one negative example) of 90.1 when analyzing facial actions. On a spontaneous facial expression dataset, CERT achieves an accuracy of nearly 80 . In a standard dual core laptop, CERT can process 320 × 240 video images in real time at approximately 10 frames per second.", "", "The present study examined the potential for information provided in a person's style of walking to reveal certain emotions. Ten subjects observed five walkers expressing four different emotions and made emotion identifications as well as judgments about specific gait characteristics. Results revealed that subjects were able to identify sadness, anger, happiness, and pride from gait information at better than chance levels; however, identifications of pride were significantly less accurate than were identifications of sadness and anger. In addition, subjects' acuracy varied across the five walkers. Results also revealed that gait characteristics such as the amount of arm swing, stride length, heavyfootedness, and walking speed differentiated the emotions expressed by walkers.", "Most studies investigating the recognition of facial expressions have focused on static displays of intense expressions. Consequently, researchers may have underestimated the importance of motion in deciphering the subtle expressions that permeate real-life situations. In two experiments, we examined the effect of motion on perception of subtle facial expressions and tested the hypotheses that motion improves affect judgment by (a) providing denser sampling of expressions, (b) providing dynamic information, (c) facilitating configural processing, and (d) enhancing the perception of change. Participants viewed faces depicting subtle facial expressions in four modes (single-static, multi-static, dynamic, and first-last). Experiment 1 demonstrated a robust effect of motion and suggested that this effect was due to the dynamic property of the expression. Experiment 2 showed that the beneficial effect of motion may be due more specifically to its role in perception of change. Together, these experiments demons...", "We propose a technique for producing ‘visual explanations’ for decisions from a large class of Convolutional Neural Network (CNN)-based models, making them more transparent. Our approach – Gradient-weighted Class Activation Mapping (Grad-CAM), uses the gradients of any target concept (say logits for ‘dog’ or even a caption), flowing into the final convolutional layer to produce a coarse localization map highlighting the important regions in the image for predicting the concept. Unlike previous approaches, Grad- CAM is applicable to a wide variety of CNN model-families: (1) CNNs with fully-connected layers (e.g. VGG), (2) CNNs used for structured outputs (e.g. captioning), (3) CNNs used in tasks with multi-modal inputs (e.g. visual question answering) or reinforcement learning, without architectural changes or re-training. We combine Grad-CAM with existing fine-grained visualizations to create a high-resolution class-discriminative visualization, Guided Grad-CAM, and apply it to image classification, image captioning, and visual question answering (VQA) models, including ResNet-based architectures. 
In the context of image classification models, our visualizations (a) lend insights into failure modes of these models (showing that seemingly unreasonable predictions have reasonable explanations), (b) outperform previous methods on the ILSVRC-15 weakly-supervised localization task, (c) are more faithful to the underlying model, and (d) help achieve model generalization by identifying dataset bias. For image captioning and VQA, our visualizations show even non-attention based models can localize inputs. Finally, we design and conduct human studies to measure if Grad-CAM explanations help users establish appropriate trust in predictions from deep networks and show that Grad-CAM helps untrained users successfully discern a ‘stronger’ deep network from a ‘weaker’ one even when both make identical predictions. Our code is available at https://github.com/ramprs/grad-cam along with a demo on CloudCV [2] and video at youtu.be/COjUB9Izk6E." ] }
1907.02288
2956093676
Is it possible to predict the affect of a user just by observing her behavioral interaction through a video? How can we, for instance, predict a user's arousal in games by merely looking at the screen during play? In this paper we address these questions by employing three dissimilar deep convolutional neural network architectures in our attempt to learn the underlying mapping between video streams of gameplay and the player's arousal. We test the algorithms in an annotated dataset of 50 gameplay videos of a survival shooter game and evaluate the deep learned models' capacity to classify high vs low arousal levels. Our key findings with the demanding leave-one-video-out validation method reveal accuracies of over 78 on average and 98 at best. While this study focuses on games and player experience as a test domain, the findings and methodology are directly relevant to any affective computing area, introducing a general and user-agnostic approach for modeling affect.
Conventional machine learning methods have often been used for pattern recognition in images, videos and other data types, but have been held back by the requirement that raw data needed to be transformed to a suitable representation via a handcrafted feature construction process based on expert knowledge. The recent success of @cite_39 approaches is largely due to their ability to learn representations directly from the raw data via the composition of simple but nonlinear data transformations. Very complex functions can be learned by combining enough transformations, and deep learning has shown tremendous success in visual recognition @cite_2 , natural language processing @cite_5 and agent control @cite_8 .
{ "cite_N": [ "@cite_5", "@cite_8", "@cite_2", "@cite_39" ], "mid": [ "2742947407", "2145339207", "2163605009", "" ], "abstract": [ "Deep learning methods employ multiple processing layers to learn hierarchical representations of data, and have produced state-of-the-art results in many domains. Recently, a variety of model designs and methods have blossomed in the context of natural language processing (NLP). In this paper, we review significant deep learning related models and methods that have been employed for numerous NLP tasks and provide a walk-through of their evolution. We also summarize, compare and contrast the various models and put forward a detailed understanding of the past, present and future of deep learning in NLP.", "An artificial agent is developed that learns to play a diverse range of classic Atari 2600 computer games directly from sensory experience, achieving a performance comparable to that of an expert human player; this work paves the way to building general-purpose learning algorithms that bridge the divide between perception and action.", "We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5 and 17.0 which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overriding in the fully-connected layers we employed a recently-developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3 , compared to 26.2 achieved by the second-best entry.", "" ] }
1907.02288
2956093676
Is it possible to predict the affect of a user just by observing her behavioral interaction through a video? How can we, for instance, predict a user's arousal in games by merely looking at the screen during play? In this paper we address these questions by employing three dissimilar deep convolutional neural network architectures in our attempt to learn the underlying mapping between video streams of gameplay and the player's arousal. We test the algorithms in an annotated dataset of 50 gameplay videos of a survival shooter game and evaluate the deep learned models' capacity to classify high vs low arousal levels. Our key findings with the demanding leave-one-video-out validation method reveal accuracies of over 78 on average and 98 at best. While this study focuses on games and player experience as a test domain, the findings and methodology are directly relevant to any affective computing area, introducing a general and user-agnostic approach for modeling affect.
Player modeling is the study of computational models of players, their behavioral patterns and affective responses @cite_38 . If target outputs are available, a player model considers some input modality regarding the player (e.g. their gameplay and physiology) and is trained to predict aspects of the in-game behavior or the player experience. Indicatively, in studies with Super Mario Bros (Nintendo, 1985), gameplay data (e.g. number of deaths) combined with level features (e.g. number of gaps) @cite_10 , or the player's posture during gameplay @cite_31 were used to predict the player's reported affect.
{ "cite_N": [ "@cite_38", "@cite_31", "@cite_10" ], "mid": [ "", "2011974988", "2141104189" ], "abstract": [ "", "Estimating affective and cognitive states in conditions of rich human-computer interaction, such as in games, is a field of growing academic and commercial interest. Entertainment and serious games can benefit from recent advances in the field as, having access to predictors of the current state of the player (or learner) can provide useful information for feeding adaptation mechanisms that aim to maximize engagement or learning effects. In this paper, we introduce a large data corpus derived from 58 participants that play the popular Super Mario Bros platform game and attempt to create accurate models of player experience for this game genre. Within the view of the current research, features extracted both from player gameplay behavior and game levels, and player visual characteristics have been used as potential indicators of reported affect expressed as pairwise preferences between different game sessions. Using neuroevolutionary preference learning and automatic feature selection, highly accurate models of reported engagement, frustration, and challenge are constructed (model accuracies reach 91 , 92 , and 88 for engagement, frustration, and challenge, respectively). As a step further, the derived player experience models can be used to personalize the game level to desired levels of engagement, frustration, and challenge as game content is mapped to player experience through the behavioral and expressivity patterns of each player.", "This paper investigates the relationship between level design parameters of platform games, individual playing characteristics and player experience. The investigated design parameters relate to the placement and sizes of gaps in the level and the existence of direction changes; components of player experience include fun, frustration and challenge. A neural network model that maps between level design parameters, playing behavior characteristics and player reported emotions is trained using evolutionary preference learning and data from 480 platform game sessions. Results show that challenge and frustration can be predicted with a high accuracy (77.77 and 88.66 respectively) via a simple single-neuron model whereas model accuracy for fun (69.18 ) suggests the use of more complex non-linear approximators for this emotion. The paper concludes with a discussion on how the obtained models can be utilized to automatically generate game levels which will enhance player experience." ] }
1907.02288
2956093676
Is it possible to predict the affect of a user just by observing her behavioral interaction through a video? How can we, for instance, predict a user's arousal in games by merely looking at the screen during play? In this paper we address these questions by employing three dissimilar deep convolutional neural network architectures in our attempt to learn the underlying mapping between video streams of gameplay and the player's arousal. We test the algorithms in an annotated dataset of 50 gameplay videos of a survival shooter game and evaluate the deep learned models' capacity to classify high vs low arousal levels. Our key findings with the demanding leave-one-video-out validation method reveal accuracies of over 78 on average and 98 at best. While this study focuses on games and player experience as a test domain, the findings and methodology are directly relevant to any affective computing area, introducing a general and user-agnostic approach for modeling affect.
This study advances the state of the art in player modelling by using solely raw gameplay information to model a player's emotions. Within the broader area of artificial intelligence and games @cite_41 , the majority of the works that analyse and extract information from gameplay videos focus on inferring the strategy, structure and the physics of the games themselves @cite_12 @cite_8 . In this work, instead, we use the same kind of information for modelling a player's experience in a general fashion (from pixels to experience), ignoring the game per se. At the same time, the most common approaches for analysing player experience, besides game and gameplay information, heavily rely on direct measurements from players, such as face monitoring, speech and physiological signals; see e.g. @cite_11 @cite_31 @cite_37 . Unlike these approaches, our methodology relies solely on gameplay video information. This critical difference advances player experience modelling as the approach does not require access to intrusive player measurements collected under well-defined experimental settings, thus allowing the collection of vast amounts of data. As gameplay videos are already available over the web and produced daily in massive amounts, the approach is feasible and can potentially generalize to any game.
{ "cite_N": [ "@cite_37", "@cite_8", "@cite_41", "@cite_31", "@cite_12", "@cite_11" ], "mid": [ "2103184652", "2145339207", "", "2011974988", "", "2884289434" ], "abstract": [ "More than 15 years after the early studies in Affective Computing (AC), [1] the problem of detecting and modeling emotions in the context of human-computer interaction (HCI) remains complex and largely unexplored. The detection and modeling of emotion is, primarily, the study and use of artificial intelligence (AI) techniques for the construction of computational models of emotion. The key challenges one faces when attempting to model emotion [2] are inherent in the vague definitions and fuzzy boundaries of emotion, and in the modeling methodology followed. In this context, open research questions are still present in all key components of the modeling process. These include, first, the appropriateness of the modeling tool employed to map emotional manifestations and responses to annotated affective states; second, the processing of signals that express these manifestations (i.e., model input); and third, the way affective annotation (i.e., model output) is handled. This paper touches upon all three key components of an affective model (i.e., input, model, output) and introduces the use of deep learning (DL) [3], [4], [5] methodologies for affective modeling from multiple physiological signals.", "An artificial agent is developed that learns to play a diverse range of classic Atari 2600 computer games directly from sensory experience, achieving a performance comparable to that of an expert human player; this work paves the way to building general-purpose learning algorithms that bridge the divide between perception and action.", "", "Estimating affective and cognitive states in conditions of rich human-computer interaction, such as in games, is a field of growing academic and commercial interest. Entertainment and serious games can benefit from recent advances in the field as, having access to predictors of the current state of the player (or learner) can provide useful information for feeding adaptation mechanisms that aim to maximize engagement or learning effects. In this paper, we introduce a large data corpus derived from 58 participants that play the popular Super Mario Bros platform game and attempt to create accurate models of player experience for this game genre. Within the view of the current research, features extracted both from player gameplay behavior and game levels, and player visual characteristics have been used as potential indicators of reported affect expressed as pairwise preferences between different game sessions. Using neuroevolutionary preference learning and automatic feature selection, highly accurate models of reported engagement, frustration, and challenge are constructed (model accuracies reach 91 , 92 , and 88 for engagement, frustration, and challenge, respectively). As a step further, the derived player experience models can be used to personalize the game level to desired levels of engagement, frustration, and challenge as game content is mapped to player experience through the behavioral and expressivity patterns of each player.", "", "We consider the problem of automatic highlight-detection in video game streams. 
Currently, the vast majority of highlight-detection systems for games are triggered by the occurrence of hard-coded game events (e.g., score change, end-game), while most advanced tools and techniques are based on detection of highlights via visual analysis of game footage. We argue that in the context of game streaming, events that may constitute highlights are not only dependent on game footage, but also on social signals that are conveyed by the streamer during the play session (e.g., when interacting with viewers, or when commenting and reacting to the game). In this light, we present a multi-view unsupervised deep learning methodology for novelty-based highlight detection. The method jointly analyses both game footage and social signals such as the players facial expressions and speech, and shows promising results for generating highlights on streams of popular games such as Player Unknown's Battlegrounds." ] }
1907.02452
2956043269
This paper addresses the data-driven identification of latent dynamical representations of partially-observed systems, i.e., dynamical systems for which some components are never observed, with an emphasis on forecasting applications, including long-term asymptotic patterns. Whereas state-of-the-art data-driven approaches rely on delay embeddings and linear decompositions of the underlying operators, we introduce a framework based on the data-driven identification of an augmented state-space model using a neural-network-based representation. For a given training dataset, it amounts to jointly learn an ODE (Ordinary Differential Equation) representation in the latent space and reconstructing latent states. Through numerical experiments, we demonstrate the relevance of the proposed framework w.r.t. state-of-the-art approaches in terms of short-term forecasting performance and long-term behaviour. We further discuss how the proposed framework relates to Koopman operator theory and Takens' embedding theorem.
The simplest example of an embedding is the case where our observation operator is an identity matrix. With such an embedding we have direct access to the state variable @math which is governed by a deterministic ODE. This particular case has been widely studied in the literature: parametric representations based on augmented state formulations have been for decades the most popular models due to their simplicity and interpretability @cite_15 , @cite_25 . In recent years, neural networks and deep learning have enriched the state of the art of parametric representations @cite_2 , @cite_9 . In particular, the link between residual networks @cite_24 , @cite_16 and numerical integration schemes has opened new research avenues for learning extremely accurate dynamical models even from irregularly-sampled training data.
{ "cite_N": [ "@cite_9", "@cite_24", "@cite_2", "@cite_15", "@cite_16", "@cite_25" ], "mid": [ "2782210340", "2963755523", "2788823228", "2060458267", "2937472180", "2239232218" ], "abstract": [ "The process of transforming observed data into predictive mathematical models of the physical world has always been paramount in science and engineering. Although data is currently being collected at an ever-increasing pace, devising meaningful models out of such observations in an automated fashion still remains an open problem. In this work, we put forth a machine learning approach for identifying nonlinear dynamical systems from data. Specifically, we blend classical tools from numerical analysis, namely the multi-step time-stepping schemes, with powerful nonlinear function approximators, namely deep neural networks, to distill the mechanisms that govern the evolution of a given data-set. We test the effectiveness of our approach for several benchmark problems involving the identification of complex, nonlinear and chaotic dynamics, and we demonstrate how this allows us to accurately learn the dynamics, forecast future states, and identify basins of attraction. In particular, we study the Lorenz system, the fluid flow behind a cylinder, the Hopf bifurcation, and the Glycoltic oscillator model as an example of complicated nonlinear dynamics typical of biological systems.", "We introduce a new family of deep neural network models. Instead of specifying a discrete sequence of hidden layers, we parameterize the derivative of the hidden state using a neural network. The output of the network is computed using a blackbox differential equation solver. These continuous-depth models have constant memory cost, adapt their evaluation strategy to each input, and can explicitly trade numerical precision for speed. We demonstrate these properties in continuous-depth residual networks and continuous-time latent variable models. We also construct continuous normalizing flows, a generative model that can train by maximum likelihood, without partitioning or ordering the data dimensions. For training, we show how to scalably backpropagate through any ODE solver, without access to its internal operations. This allows end-to-end training of ODEs within larger models.", "Our work explores methods for the data-driven inference of temporal evolutions of physical functions with deep learning techniques. More specifically, we target fluid flow problems, and we propose a novel LSTM-based approach to predict the changes of the pressure field over time. The central challenge in this context is the high dimensionality of Eulerian space-time data sets. Key for arriving at a feasible algorithm is a technique for dimensionality reduction based on convolutional neural networks, as well as a special architecture for temporal prediction. We demonstrate that dense 3D+time functions of physics system can be predicted with neural networks, and we arrive at a neural-network based simulation algorithm with significant practical speed-ups. We demonstrate the capabilities of our method with a series of complex liquid simulations, and with a set of single-phase buoyancy simulations. With a set of trained networks, our method is more than two orders of magnitudes faster than a traditional pressure solver. Additionally, we present and discuss a series of detailed evaluations for the different components of our algorithm.", "In this paper, we propose a method to model nonlinear systems using polynomial nonlinear state space equations. 
Obtaining good initial estimates is a major problem in nonlinear modelling. It is solved here by identifying first the best linear approximation of the system under test. The proposed identification procedure is successfully applied to measurements of two physical systems.", "In this work, we investigate residual neural network representations for the identification and forecasting of dynamical systems. We propose a novel architecture that jointly learns the dynamical model and the associated Runge-Kutta integration scheme. We demonstrate the relevance of the proposed architecture with respect to learning-based state-of-the-art approaches in the identification and forecasting of chaotic dynamics when provided with training data with low temporal sampling rates.", "Extracting governing equations from data is a central challenge in many diverse areas of science and engineering. Data are abundant whereas models often remain elusive, as in climate science, neuroscience, ecology, finance, and epidemiology, to name only a few examples. In this work, we combine sparsity-promoting techniques and machine learning with nonlinear dynamical systems to discover governing equations from noisy measurement data. The only assumption about the structure of the model is that there are only a few important terms that govern the dynamics, so that the equations are sparse in the space of possible functions; this assumption holds for many physical systems in an appropriate basis. In particular, we use sparse regression to determine the fewest terms in the dynamic governing equations required to accurately represent the data. This results in parsimonious models that balance accuracy with model complexity to avoid overfitting. We demonstrate the algorithm on a wide range of problems, from simple canonical systems, including linear and nonlinear oscillators and the chaotic Lorenz system, to the fluid vortex shedding behind an obstacle. The fluid example illustrates the ability of this method to discover the underlying dynamics of a system that took experts in the community nearly 30 years to resolve. We also show that this method generalizes to parameterized systems and systems that are time-varying or have external forcing." ] }
1907.02266
2955698149
We give new partially-dynamic algorithms for the all-pairs shortest paths problem in weighted directed graphs. Most importantly, we give a new deterministic incremental algorithm for the problem that handles updates in @math total time (where the edge weights are from @math ) and explicitly maintains a @math -approximate distance matrix. For a fixed @math , this is the first deterministic partially dynamic algorithm for all-pairs shortest paths in directed graphs, whose update time is @math regardless of the number of edges. Furthermore, we also show how to improve the state-of-the-art partially dynamic randomized algorithms for all-pairs shortest paths [ STOC'02, Bernstein STOC'13] from Monte Carlo randomized to Las Vegas randomized without increasing the running time bounds (with respect to the @math notation). Our results are obtained by giving new algorithms for the problem of dynamically maintaining hubs, that is a set of @math vertices which hit a shortest path between each pair of vertices, provided it has hop-length @math . We give new subquadratic deterministic and Las Vegas algorithms for maintenance of hubs under either edge insertions or deletions.
Dynamic graph problems on digraphs are considerably harder than their counterparts on undirected graphs. An extreme example is the dynamic reachability problem, that is, transitive closure on directed graphs and connectivity on undirected graphs. While there exist algorithms for undirected graphs with polylogarithmic query and update times @cite_18 @cite_27 @cite_40 @cite_0 @cite_29 , in the case of directed graphs the best known algorithm with polylogarithmic query time has an update time of @math @cite_32 @cite_39 @cite_30 . In addition, a combinatorial algorithm with an update time of @math is ruled out under the Boolean matrix multiplication conjecture @cite_16 .
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_29", "@cite_32", "@cite_39", "@cite_0", "@cite_27", "@cite_40", "@cite_16" ], "mid": [ "", "2045430818", "2395489635", "2099450374", "", "2102347446", "1886894033", "2010151376", "2146507303" ], "abstract": [ "", "Deterministic fully dynamic graph algorithms are presented for connectivity, minimum spanning tree, 2-edge connectivity, and biconnectivity. Assuming that we start with no edges in a graph with n vertices, the amortized operation costs are O(log2 n) for connectivity, O(log4 n) for minimum spanning forest, 2-edge connectivity, and O(log5 n) biconnectivity.", "The dynamic graph connectivity problem is the following: given a graph on a fixed set of n nodes which is undergoing a sequence of edge insertions and deletions, answer queries of the form q(a, b): \"Is there a path between nodes a and b?\" While data structures for this problem with polylogarithmic amortized time per operation have been known since the mid-1990's, these data structures have Θ(n) worst case time. In fact, no previously known solution has worst case time per operation which is o(√n). We present a solution with worst case times O(log4 n) per edge insertion, O(log5 n) per edge deletion, and O(log n log log n) per query. The answer to each query is correct if the answer is \"yes\" and is correct with high probability if the answer is \"no\". The data structure is based on a simple novel idea which can be used to quickly identify an edge in a cutset. Our technique can be used to simplify and significantly speed up the preprocessing time for the emergency planning problem while matching previous bounds for an update, and to approximate the sizes of cutsets of dynamic graphs in time O(min |S|, |V | ) for an oblivious adversary.", "We consider dynamic evaluation of algebraic functions such as computing determinant, matrix adjoint, matrix inverse and solving linear system of equations. We show that in the dynamic setup the above problems can be solved faster than evaluating everything from scratch. In the case when rows and columns of the matrix can change we show an algorithm that achieves O(n sup 2 ) arithmetic operations per update and O(1) arithmetic operations per query. Next, we describe two algorithms, with different tradeoffs, for updating the inverse and determinant when single entries of the matrix are changed. The fastest update for the first tradeoff is O(n sup 1.575 ) arithmetic operations per update and O(n sup 0.575 ) arithmetic operations per query. The second tradeoff gives O(n sup 1.495 ) arithmetic operations per update and O(n sup 1.495 ) arithmetic operations per query. We also consider the case when some number of columns or rows can change. We use dynamic determinant computations to solve the following problems in the dynamic setup: computing the number of spanning trees in a graph and testing if an edge in a graph is contained in some perfect matching. These are the first dynamic algorithms for these problems. Next, with the use of dynamic matrix inverse, we solve fully dynamic transitive closure in general directed graphs. The bounds on arithmetic operations for dynamic matrix inverse translate directly to time bounds for dynamic transitive closure. Thus we obtain the first known algorithm with O(n sup 2 ) worst-case update time and constant query time and two algorithms for transitive closure in general digraphs with subquadratic update and query times. Our algorithms for transitive closure are randomized with one-sided error. 
We also consider for the first time the case when the edges incident with a part of vertices of the graph can be changed.", "", "This paper solves a longstanding open problem in fully dynamic algorithms: We present the first fully dynamic algorithms that maintain connectivity, bipartiteness, and approximate minimum spanning trees in polylogarithmic time per edge insertion or deletion. The algorithms are designed using a new dynamic technique that combines a novel graph decomposition with randomization. They are Las-Vegas type randomized algorithms which use simple data structures and have a small constant factor. Let n denote the number of nodes in the graph. For a sequence of V(m0) operations, where m0 is the number of edges in the initial graph, the expected time for p updates is O( p log 3 n) (Throughout the paper the logarithms are base 2.) for connectivity and bipartiteness. The worst-case time for one query is O(log n log log n). For the k-edge witness problem (\"Does the removal of k given edges disconnect the graph?\") the expected time for p updates is O( p log 3 n) and the expected time for q queries is O(qk log 3 n). Given a graph with k different weights, the minimum spanning tree can be maintained during a sequence of p updates in expected time O( pk log 3 n). This implies an algorithm to maintain a 1 1 e-approximation of the minimum spanning tree in expected time O((p log 3 n log U) e) for p updates, where the weights of the edges are between 1 and U.", "We give new deterministic bounds for fully-dynamic graph connectivity. Our data structure supports updates (edge insertions deletions) in O(log2 n log log n) amortized time and connectivity queries in O(log n log log n) worst-case time, where n is the number of vertices of the graph. This improves the deterministic data structures of Holm, de Lichtenberg, and Thorup (STOC 1998, J. ACM 2001) and Thorup (STOC 2000) which both have O(log2 n) amortized update time and O(log n log log n) worst-case query time. Our model of computation is the same as that of Thorup, i.e., a pointer machine with standard AC0 instructions.", "In this paper we present near-optimal bounds for fullydynamic graph connectivity which is the most basic nontrivial fully-dynamic graph problem. Connectivity queries are supported in O(log n log log log n) time while the updates are supported in O(log n(log log n) 3) expected amortized time. The previous best update time was O((log n)2). Our new bound is only doubly-logarithmic factors from a general cell probe lower bound of f2(log n log log n). Our algorithm runs on a pointer machine, and uses only standard AC ° instructions. In our developments we make some comparatively trivial observations improving some deterministic bounds. The space bound of the previous O((log n) ) connectivity algorithm is improved from O(m + n log n) to O(m). The previous time complexity of fully-dynamic 2-edge and biconnectivity is improved from O((log n) 4) to O((log n) 3 log log n).", "We consider several well-studied problems in dynamic algorithms and prove that sufficient progress on any of them would imply a breakthrough on one of five major open problems in the theory of algorithms: 1) Is the 3SUM problem on n numbers in O(n2 -- aepsi;) time for some aepsi; > 0? 2) Can one determine the satisfiability of a CNF formula on n variables and poly n clauses in O((2 -- aepsi;)npolyn) time for some aepsi; > 0? 3) Is the All Pairs Shortest Paths problem for graphs on n vertices in O(n3 -- aepsi;) time for some aepsi; > 0? 
4) Is there a linear time algorithm that detects whether a given graph contains a triangle? 5) Is there an O(n3 -- aepsi;) time combinatorial algorithm for n × n Boolean matrix multiplication? The problems we consider include dynamic versions of bipartite perfect matching, bipartite maximum weight matching, single source reachability, single source shortest paths, strong connectivity, subgraph connectivity, diameter approximation and some nongraph problems such as Pagh's problem defined in a recent paper by pa#x0103;traa#x015F;cu [STOC 2010]." ] }
1907.02336
2954329185
Recent advances in deep learning have pushed the performances of visual saliency models way further than it has ever been. Numerous models in the literature present new ways to design neural networks, to arrange gaze pattern data, or to extract as much high and low-level image features as possible in order to create the best saliency representation. However, one key part of a typical deep learning model is often neglected: the choice of the loss function. In this work, we explore some of the most popular loss functions that are used in deep saliency models. We demonstrate that on a fixed network architecture, modifying the loss function can significantly improve (or depreciate) the results, hence emphasizing the importance of the choice of the loss function when designing a model. We also introduce new loss functions that have never been used for saliency prediction to our knowledge. And finally, we show that a linear combination of several well-chosen loss functions leads to significant improvements in performances on different datasets as well as on a different network architecture, hence demonstrating the robustness of a combined metric.
Computational models of saliency prediction, a long-standing problem in computer vision, have been studied from so many perspectives that going through all of them is beyond the scope of this manuscript. We thus provide a brief account of relevant works and summarize them in this section. We refer the readers to @cite_21 @cite_29 for an overview.
{ "cite_N": [ "@cite_29", "@cite_21" ], "mid": [ "2032007016", "2164084182" ], "abstract": [ "Visual attention is a process that enables biological and machine vision systems to select the most relevant regions from a scene. Relevance is determined by two components: 1) top-down factors driven by task and 2) bottom-up factors that highlight image regions that are different from their surroundings. The latter are often referred to as “visual saliency.” Modeling bottom-up visual saliency has been the subject of numerous research efforts during the past 20 years, with many successful applications in computer vision and robotics. Available models have been tested with different datasets (e.g., synthetic psychological search arrays, natural images or videos) using different evaluation scores (e.g., search slopes, comparison to human eye tracking) and parameter settings. This has made direct comparison of models difficult. Here, we perform an exhaustive comparison of 35 state-of-the-art saliency models over 54 challenging synthetic patterns, three natural image datasets, and two video datasets, using three evaluation scores. We find that although model rankings vary, some models consistently perform better. Analysis of datasets reveals that existing datasets are highly center-biased, which influences some of the evaluation scores. Computational complexity analysis shows that some models are very fast, yet yield competitive eye movement prediction accuracy. Different models often have common easy difficult stimuli. Furthermore, several concerns in visual saliency modeling, eye movement datasets, and evaluation scores are discussed and insights for future work are provided. Our study allows one to assess the state-of-the-art, helps to organizing this rapidly growing field, and sets a unified comparison framework for gauging future efforts, similar to the PASCAL VOC challenge in the object recognition and detection domains.", "Modeling visual attention-particularly stimulus-driven, saliency-based attention-has been a very active research area over the past 25 years. Many different models of attention are now available which, aside from lending theoretical contributions to other fields, have demonstrated successful applications in computer vision, mobile robotics, and cognitive systems. Here we review, from a computational perspective, the basic concepts of attention implemented in these models. We present a taxonomy of nearly 65 models, which provides a critical comparison of approaches, their capabilities, and shortcomings. In particular, 13 criteria derived from behavioral and computational studies are formulated for qualitative comparison of attention models. Furthermore, we address several challenging issues with models, including biological plausibility of the computations, correlation with eye movement datasets, bottom-up and top-down dissociation, and constructing meaningful performance measures. Finally, we highlight current research trends in attention modeling and provide insights for future." ] }
1907.02336
2954329185
Recent advances in deep learning have pushed the performances of visual saliency models way further than it has ever been. Numerous models in the literature present new ways to design neural networks, to arrange gaze pattern data, or to extract as much high and low-level image features as possible in order to create the best saliency representation. However, one key part of a typical deep learning model is often neglected: the choice of the loss function. In this work, we explore some of the most popular loss functions that are used in deep saliency models. We demonstrate that on a fixed network architecture, modifying the loss function can significantly improve (or depreciate) the results, hence emphasizing the importance of the choice of the loss function when designing a model. We also introduce new loss functions that have never been used for saliency prediction to our knowledge. And finally, we show that a linear combination of several well-chosen loss functions leads to significant improvements in performances on different datasets as well as on a different network architecture, hence demonstrating the robustness of a combined metric.
To date, from a computer vision perspective, we can divide the research on computational models of saliency prediction into two eras: (1) pre-deep learning, and (2) deep learning. During the pre-deep learning period, a significant number of saliency models were introduced, e.g. @cite_28 @cite_20 @cite_30 @cite_24 @cite_23 , and numerous survey papers looked into these models and their properties, e.g. @cite_21 @cite_36 . During this period the community converged on adopting eye tracking as a medium for obtaining ground truth and dealt with challenges regarding the evaluation and the models @cite_0 @cite_27 . This era then gave way to saliency models based on deep learning techniques @cite_31 , which will be the main focus of this paper.
{ "cite_N": [ "@cite_30", "@cite_28", "@cite_36", "@cite_21", "@cite_24", "@cite_0", "@cite_27", "@cite_23", "@cite_31", "@cite_20" ], "mid": [ "1924619199", "", "1980711281", "2164084182", "2037328649", "2148383759", "2063608179", "2138046011", "2896657282", "2139047169" ], "abstract": [ "In fine art, especially painting, humans have mastered the skill to create unique visual experiences through composing a complex interplay between the content and style of an image. Thus far the algorithmic basis of this process is unknown and there exists no artificial system with similar capabilities. However, in other key areas of visual perception such as object and face recognition near-human performance was recently demonstrated by a class of biologically inspired vision models called Deep Neural Networks. Here we introduce an artificial system based on a Deep Neural Network that creates artistic images of high perceptual quality. The system uses neural representations to separate and recombine content and style of arbitrary images, providing a neural algorithm for the creation of artistic images. Moreover, in light of the striking similarities between performance-optimised artificial neural networks and biological vision, our work offers a path forward to an algorithmic understanding of how humans create and perceive artistic imagery.", "", "Humans and other primates shift their gaze to allocate processing resources to a subset of the visual input. Understanding and emulating the way that human observers free-view a natural scene has both scientific and economic impact. It has therefore attracted the attention from researchers in a wide range of science and engineering disciplines. With the ever increasing computational power, machine learning has become a popular tool to mine human data in the exploration of how people direct their gaze when inspecting a visual scene. This paper reviews recent advances in learning saliency-based visual attention and discusses several key issues in this topic.", "Modeling visual attention-particularly stimulus-driven, saliency-based attention-has been a very active research area over the past 25 years. Many different models of attention are now available which, aside from lending theoretical contributions to other fields, have demonstrated successful applications in computer vision, mobile robotics, and cognitive systems. Here we review, from a computational perspective, the basic concepts of attention implemented in these models. We present a taxonomy of nearly 65 models, which provides a critical comparison of approaches, their capabilities, and shortcomings. In particular, 13 criteria derived from behavioral and computational studies are formulated for qualitative comparison of attention models. Furthermore, we address several challenging issues with models, including biological plausibility of the computations, correlation with eye movement datasets, bottom-up and top-down dissociation, and constructing meaningful performance measures. Finally, we highlight current research trends in attention modeling and provide insights for future.", "We introduce a simple image descriptor referred to as the image signature. We show, within the theoretical framework of sparse signal mixing, that this quantity spatially approximates the foreground of an image. We experimentally investigate whether this approximate foreground overlaps with visually conspicuous image locations by developing a saliency algorithm based on the image signature. 
This saliency algorithm predicts human fixation points best among competitors on the Bruce and Tsotsos [1] benchmark data set and does so in much shorter running time. In a related experiment, we demonstrate with a change blindness data set that the distance between images induced by the image signature is closer to human perceptual distance than can be achieved using other saliency algorithms, pixel-wise, or GIST [2] descriptor methods.", "Visual saliency has been an increasingly active research area in the last ten years with dozens of saliency models recently published. Nowadays, one of the big challenges in the field is to find a way to fairly evaluate all of these models. In this paper, on human eye fixations, we compare the ranking of 12 state-of-the art saliency models using 12 similarity metrics. The comparison is done on Jian Li's database containing several hundreds of natural images. Based on Kendall concordance coefficient, it is shown that some of the metrics are strongly correlated leading to a redundancy in the performance metrics reported in the available benchmarks. On the other hand, other metrics provide a more diverse picture of models' overall performance. As a recommendation, three similarity metrics should be used to obtain a complete point of view of saliency model performance.", "Significant recent progress has been made in developing high-quality saliency models. However, less effort has been undertaken on fair assessment of these models, over large standardized datasets and correctly addressing confounding factors. In this study, we pursue a critical and quantitative look at challenges (e.g., center-bias, map smoothing) in saliency modeling and the way they affect model accuracy. We quantitatively compare 32 state-of-the-art models (using the shuffled AUC score to discount center-bias) on 4 benchmark eye movement datasets, for prediction of human fixation locations and scan path sequence. We also account for the role of map smoothing. We find that, although model rankings vary, some (e.g., AWS, LG, AIM, and HouNIPS) consistently outperform other models over all datasets. Some models work well for prediction of both fixation locations and scan path sequence (e.g., Judd, GBVS). Our results show low prediction accuracy for models over emotional stimuli from the NUSEF dataset. Our last benchmark, for the first time, gauges the ability of models to decode the stimulus category from statistics of fixations, saccades, and model saliency values at fixated locations. In this test, ITTI and AIM models win over other models. Our benchmark provides a comprehensive high-level picture of the strengths and weaknesses of many popular models, and suggests future research directions in saliency modeling.", "A novel Boolean Map based Saliency (BMS) model is proposed. An image is characterized by a set of binary images, which are generated by randomly thresholding the image's color channels. Based on a Gestalt principle of figure-ground segregation, BMS computes saliency maps by analyzing the topological structure of Boolean maps. BMS is simple to implement and efficient to run. Despite its simplicity, BMS consistently achieves state-of-the-art performance compared with ten leading methods on five eye tracking datasets. Furthermore, BMS is also shown to be advantageous in salient object detection.", "Visual saliency models have enjoyed a big leap in performance in recent years, thanks to advances in deep learning and large scale annotated data. 
Despite enormous effort and huge breakthroughs, however, models still fall short in reaching human-level accuracy. In this work, I explore the landscape of the field emphasizing on new deep saliency models, benchmarks, and datasets. A large number of image and video saliency models are reviewed and compared over two image benchmarks and two large scale video datasets. Further, I identify factors that contribute to the gap between models and humans and discuss remaining issues that need to be addressed to build the next generation of more powerful saliency models. Some specific questions that are addressed include: in what ways current models fail, how to remedy them, what can be learned from cognitive studies of attention, how explicit saliency judgments relate to fixations, how to conduct fair model comparison, and what are the emerging applications of saliency models.", "A model of bottom-up overt attention is proposed based on the principle of maximizing information sampled from a scene. The proposed operation is based on Shannon's self-information measure and is achieved in a neural circuit, which is demonstrated as having close ties with the circuitry existent in the primate visual cortex. It is further shown that the proposed saliency measure may be extended to address issues that currently elude explanation in the domain of saliency based models. Results on natural images are compared with experimental eye tracking data revealing the efficacy of the model in predicting the deployment of overt attention as compared with existing efforts." ] }
1907.02336
2954329185
Recent advances in deep learning have pushed the performances of visual saliency models way further than it has ever been. Numerous models in the literature present new ways to design neural networks, to arrange gaze pattern data, or to extract as much high and low-level image features as possible in order to create the best saliency representation. However, one key part of a typical deep learning model is often neglected: the choice of the loss function. In this work, we explore some of the most popular loss functions that are used in deep saliency models. We demonstrate that on a fixed network architecture, modifying the loss function can significantly improve (or depreciate) the results, hence emphasizing the importance of the choice of the loss function when designing a model. We also introduce new loss functions that have never been used for saliency prediction to our knowledge. And finally, we show that a linear combination of several well-chosen loss functions leads to significant improvements in performances on different datasets as well as on a different network architecture, hence demonstrating the robustness of a combined metric.
The use of deep learning introduced new challenges to the community. The characteristics of most of the models shifted towards data-intensive models based on deep convolutional neural networks (CNNs). To train a model, a huge amount of data is required, motivating the search for alternatives to eye tracking databases, such as mouse tracking @cite_45 , or pooling all the existing eye tracking databases into one @cite_1 .
{ "cite_N": [ "@cite_45", "@cite_1" ], "mid": [ "1934890906", "2472782738" ], "abstract": [ "Saliency in Context (SALICON) is an ongoing effort that aims at understanding and predicting visual attention. This paper presents a new method to collect large-scale human data during natural explorations on images. While current datasets present a rich set of images and task-specific annotations such as category labels and object segments, this work focuses on recording and logging how humans shift their attention during visual exploration. The goal is to offer new possibilities to (1) complement task-specific annotations to advance the ultimate goal in visual understanding, and (2) understand visual attention and learn saliency models, all with human attentional data at a much larger scale. We designed a mouse-contingent multi-resolutional paradigm based on neurophysiological and psychophysical studies of peripheral vision, to simulate the natural viewing behavior of humans. The new paradigm allowed using a general-purpose mouse instead of an eye tracker to record viewing behaviors, thus enabling large-scale data collection. The paradigm was validated with controlled laboratory as well as large-scale online data. We report in this paper a proof-of-concept SALICON dataset of human “free-viewing” data on 10,000 images from the Microsoft COCO (MS COCO) dataset with rich contextual information. We evaluated the use of the collected data in the context of saliency prediction, and demonstrated them a good source as ground truth for the evaluation of saliency algorithms.", "In this paper we consider the problem of visual saliency modeling, including both human gaze prediction and salient object segmentation. The overarching goal of the paper is to identify high level considerations relevant to deriving more sophisticated visual saliency models. A deep learning model based on fully convolutional networks (FCNs) is presented, which shows very favorable performance across a wide variety of benchmarks relative to existing proposals. We also demonstrate that the manner in which training data is selected, and ground truth treated is critical to resulting model behaviour. Recent efforts have explored the relationship between human gaze and salient objects, and we also examine this point further in the context of FCNs. Close examination of the proposed and alternative models serves as a vehicle for identifying problems important to developing more comprehensive models going forward." ] }
1907.02336
2954329185
Recent advances in deep learning have pushed the performances of visual saliency models way further than it has ever been. Numerous models in the literature present new ways to design neural networks, to arrange gaze pattern data, or to extract as much high and low-level image features as possible in order to create the best saliency representation. However, one key part of a typical deep learning model is often neglected: the choice of the loss function. In this work, we explore some of the most popular loss functions that are used in deep saliency models. We demonstrate that on a fixed network architecture, modifying the loss function can significantly improve (or depreciate) the results, hence emphasizing the importance of the choice of the loss function when designing a model. We also introduce new loss functions that have never been used for saliency prediction to our knowledge. And finally, we show that a linear combination of several well-chosen loss functions leads to significant improvements in performances on different datasets as well as on a different network architecture, hence demonstrating the robustness of a combined metric.
To improve the training, Bruce et al. @cite_1 investigated the factors one needs to take into account when relying on deep models, such as pre-processing steps, tricks for pooling all the eye tracking databases together, and other nuances of training a deep model. The authors, however, considered only one loss function in their study.
{ "cite_N": [ "@cite_1" ], "mid": [ "2472782738" ], "abstract": [ "In this paper we consider the problem of visual saliency modeling, including both human gaze prediction and salient object segmentation. The overarching goal of the paper is to identify high level considerations relevant to deriving more sophisticated visual saliency models. A deep learning model based on fully convolutional networks (FCNs) is presented, which shows very favorable performance across a wide variety of benchmarks relative to existing proposals. We also demonstrate that the manner in which training data is selected, and ground truth treated is critical to resulting model behaviour. Recent efforts have explored the relationship between human gaze and salient objects, and we also examine this point further in the context of FCNs. Close examination of the proposed and alternative models serves as a vehicle for identifying problems important to developing more comprehensive models going forward." ] }
1907.02336
2954329185
Recent advances in deep learning have pushed the performances of visual saliency models way further than it has ever been. Numerous models in the literature present new ways to design neural networks, to arrange gaze pattern data, or to extract as much high and low-level image features as possible in order to create the best saliency representation. However, one key part of a typical deep learning model is often neglected: the choice of the loss function. In this work, we explore some of the most popular loss functions that are used in deep saliency models. We demonstrate that on a fixed network architecture, modifying the loss function can significantly improve (or depreciate) the results, hence emphasizing the importance of the choice of the loss function when designing a model. We also introduce new loss functions that have never been used for saliency prediction to our knowledge. And finally, we show that a linear combination of several well-chosen loss functions leads to significant improvements in performances on different datasets as well as on a different network architecture, hence demonstrating the robustness of a combined metric.
Tavakoli et al. @cite_4 looked into the correlation between mouse tracking and eye tracking in finer detail, showing that the data from the two modalities are not exactly the same. They demonstrated that, while mouse tracking is useful for training a deep model, it is less reliable for model selection and evaluation, in particular when the evaluation standards are based on eye tracking.
{ "cite_N": [ "@cite_4" ], "mid": [ "2619291653" ], "abstract": [ "This paper revisits visual saliency prediction by evaluating the recent advancements in this field such as crowd-sourced mouse tracking-based databases and contextual annotations. We pursue a critical and quantitative approach towards some of the new challenges including the quality of mouse tracking versus eye tracking for model training and evaluation. We extend quantitative evaluation of models in order to incorporate contextual information by proposing an evaluation methodology that allows accounting for contextual factors such as text, faces, and object attributes. The proposed contextual evaluation scheme facilitates detailed analysis of models and helps identify their pros and cons. Through several experiments, we find that (1) mouse tracking data has lower inter-participant visual congruency and higher dispersion, compared to the eye tracking data, (2) mouse tracking data does not totally agree with eye tracking in general and in terms of different contextual regions in specific, and (3) mouse tracking data leads to acceptable results in training current existing models, and (4) mouse tracking data is less reliable for model selection and evaluation. The contextual evaluation also reveals that, among the studied models, there is no single model that performs best on all the tested annotations." ] }
1907.02336
2954329185
Recent advances in deep learning have pushed the performances of visual saliency models way further than it has ever been. Numerous models in the literature present new ways to design neural networks, to arrange gaze pattern data, or to extract as much high and low-level image features as possible in order to create the best saliency representation. However, one key part of a typical deep learning model is often neglected: the choice of the loss function. In this work, we explore some of the most popular loss functions that are used in deep saliency models. We demonstrate that on a fixed network architecture, modifying the loss function can significantly improve (or depreciate) the results, hence emphasizing the importance of the choice of the loss function when designing a model. We also introduce new loss functions that have never been used for saliency prediction to our knowledge. And finally, we show that a linear combination of several well-chosen loss functions leads to significant improvements in performances on different datasets as well as on a different network architecture, hence demonstrating the robustness of a combined metric.
Given the sudden boost in the overall performance of saliency models using deep learning techniques, Bylinskii et al. @cite_33 re-evaluated the existing benchmarks and looked into the factors influencing model performance in finer detail. They quantified the remaining gap between models and humans and argued that pushing performance further will require high-level image understanding.
{ "cite_N": [ "@cite_33" ], "mid": [ "2520859141" ], "abstract": [ "Recently, large breakthroughs have been observed in saliency modeling. The top scores on saliency benchmarks have become dominated by neural network models of saliency, and some evaluation scores have begun to saturate. Large jumps in performance relative to previous models can be found across datasets, image types, and evaluation metrics. Have saliency models begun to converge on human performance? In this paper, we re-examine the current state-of-the-art using a fine-grained analysis on image types, individual images, and image regions. Using experiments to gather annotations for high-density regions of human eye fixations on images in two established saliency datasets, MIT300 and CAT2000, we quantify up to 60 of the remaining errors of saliency models. We argue that to continue to approach human-level performance, saliency models will need to discover higher-level concepts in images: text, objects of gaze and action, locations of motion, and expected locations of people in images. Moreover, they will need to reason about the relative importance of image regions, such as focusing on the most important person in the room or the most informative sign on the road. More accurately tracking performance will require finer-grained evaluations and metrics. Pushing performance further will require higher-level image understanding." ] }
1907.02336
2954329185
Recent advances in deep learning have pushed the performances of visual saliency models way further than it has ever been. Numerous models in the literature present new ways to design neural networks, to arrange gaze pattern data, or to extract as much high and low-level image features as possible in order to create the best saliency representation. However, one key part of a typical deep learning model is often neglected: the choice of the loss function. In this work, we explore some of the most popular loss functions that are used in deep saliency models. We demonstrate that on a fixed network architecture, modifying the loss function can significantly improve (or depreciate) the results, hence emphasizing the importance of the choice of the loss function when designing a model. We also introduce new loss functions that have never been used for saliency prediction to our knowledge. And finally, we show that a linear combination of several well-chosen loss functions leads to significant improvements in performances on different datasets as well as on a different network architecture, hence demonstrating the robustness of a combined metric.
Recently, Sen et al. @cite_44 investigated the effect of model training on the neuron representations inside a deep saliency model. They demonstrated that (1) some visual regions are more salient than others, and (2) the change in inner representations is due to the task the original model was trained on prior to being fine-tuned for saliency.
{ "cite_N": [ "@cite_44" ], "mid": [ "2920331547" ], "abstract": [ "Recently, data-driven deep saliency models have achieved high performance and have outperformed classical saliency models, as demonstrated by results on datasets such as the MIT300 and SALICON. Yet, there remains a large gap between the performance of these models and the inter-human baseline. Some outstanding questions include what have these models learned, how and where they fail, and how they can be improved. This article attempts to answer these questions by analyzing the representations learned by individual neurons located at the intermediate layers of deep saliency models. To this end, we follow the steps of existing deep saliency models, that is borrowing a pre-trained model of object recognition to encode the visual features and learning a decoder to infer the saliency. We consider two cases when the encoder is used as a fixed feature extractor and when it is fine-tuned, and compare the inner representations of the network. To study how the learned representations depend on the task, we fine-tune the same network using the same image set but for two different tasks: saliency prediction versus scene classification. Our analyses reveal that: 1) some visual regions (e.g. head, text, symbol, vehicle) are already encoded within various layers of the network pre-trained for object recognition, 2) using modern datasets, we find that fine-tuning pre-trained models for saliency prediction makes them favor some categories (e.g. head) over some others (e.g. text), 3) although deep models of saliency outperform classical models on natural images, the converse is true for synthetic stimuli (e.g. pop-out search arrays), an evidence of significant difference between human and data-driven saliency models, and 4) we confirm that, after-fine tuning, the change in inner-representations is mostly due to the task and not the domain shift in the data." ] }
1907.02336
2954329185
Recent advances in deep learning have pushed the performances of visual saliency models way further than it has ever been. Numerous models in the literature present new ways to design neural networks, to arrange gaze pattern data, or to extract as much high and low-level image features as possible in order to create the best saliency representation. However, one key part of a typical deep learning model is often neglected: the choice of the loss function. In this work, we explore some of the most popular loss functions that are used in deep saliency models. We demonstrate that on a fixed network architecture, modifying the loss function can significantly improve (or depreciate) the results, hence emphasizing the importance of the choice of the loss function when designing a model. We also introduce new loss functions that have never been used for saliency prediction to our knowledge. And finally, we show that a linear combination of several well-chosen loss functions leads to significant improvements in performances on different datasets as well as on a different network architecture, hence demonstrating the robustness of a combined metric.
Deep saliency models fall into two categories: (1) those that use CNNs as fixed feature extractors and learn a regression from the feature space to the saliency space with a non-neural technique, and (2) those that train a deep saliency model end-to-end. The number of models belonging to the first category is limited. They are not comparable within the context of this research because the regression is often carried out such that the error cannot be back-propagated; e.g., @cite_19 employs support vector machines and @cite_6 uses extreme learning machines (a minimal sketch of this first recipe is given below). Our focus is, however, on the second group.
{ "cite_N": [ "@cite_19", "@cite_6" ], "mid": [ "2078903912", "2533058588" ], "abstract": [ "Saliency prediction typically relies on hand-crafted (multiscale) features that are combined in different ways to form a \"master\" saliency map, which encodes local image conspicuity. Recent improvements to the state of the art on standard benchmarks such as MIT1003 have been achieved mostly by incrementally adding more and more hand-tuned features (such as car or face detectors) to existing models. In contrast, we here follow an entirely automatic data-driven approach that performs a large-scale search for optimal features. We identify those instances of a richly-parameterized bio-inspired model family (hierarchical neuromorphic networks) that successfully predict image saliency. Because of the high dimensionality of this parameter space, we use automated hyperparameter optimization to efficiently guide the search. The optimal blend of such multilayer features combined with a simple linear classifier achieves excellent performance on several image saliency benchmarks. Our models outperform the state of the art on MIT1003, on which features and classifiers are learned. Without additional training, these models generalize well to two other image saliency data sets, Toronto and NUSEF, despite their different image content. Finally, our algorithm scores best of all the 23 models evaluated to date on the MIT300 saliency challenge, which uses a hidden test set to facilitate an unbiased comparison.", "This paper presents a novel fixation prediction and saliency modeling framework based on inter-image similarities and ensemble of Extreme Learning Machines (ELM). The proposed framework is inspired by two observations, (1) the contextual information of a scene along with low-level visual cues modulates attention, (2) the influence of scene memorability on eye movement patterns caused by the resemblance of a scene to a former visual experience. Motivated by such observations, we develop a framework that estimates the saliency of a given image using an ensemble of extreme learners, each trained on an image similar to the input image. That is, after retrieving a set of similar images for a given image, a saliency predictor is learnt from each of the images in the retrieved image set using an ELM, resulting in an ensemble. The saliency of the given image is then measured in terms of the mean of predicted saliency value by the ensembles members." ] }
1907.02336
2954329185
Recent advances in deep learning have pushed the performances of visual saliency models way further than it has ever been. Numerous models in the literature present new ways to design neural networks, to arrange gaze pattern data, or to extract as much high and low-level image features as possible in order to create the best saliency representation. However, one key part of a typical deep learning model is often neglected: the choice of the loss function. In this work, we explore some of the most popular loss functions that are used in deep saliency models. We demonstrate that on a fixed network architecture, modifying the loss function can significantly improve (or depreciate) the results, hence emphasizing the importance of the choice of the loss function when designing a model. We also introduce new loss functions that have never been used for saliency prediction to our knowledge. And finally, we show that a linear combination of several well-chosen loss functions leads to significant improvements in performances on different datasets as well as on a different network architecture, hence demonstrating the robustness of a combined metric.
Within end-to-end deep learning techniques, the main research effort has been on architecture design. Many of the models borrow the pre-trained weights of an image recognition network and experiment with combining different layers in various ways. In other words, they engineer an encoder-decoder network that combines a selected set of features from different layers of a recognition network (a minimal sketch of this recipe is given below). In the following we discuss some of the most well-known models. Huang et al. @cite_42 proposed a multi-scale encoder based on VGG networks that learns a linear combination of the responses at two scales (fine and coarse). Kümmerer et al. @cite_2 use a single-scale model with features from multiple layers of AlexNet. Similarly, Kümmerer et al. @cite_10 and Cornia et al. @cite_11 employed single-scale models with features from multiple layers of a VGG architecture.
{ "cite_N": [ "@cite_42", "@cite_10", "@cite_11", "@cite_2" ], "mid": [ "2212216676", "2738450183", "2963828885", "2964145162" ], "abstract": [ "Saliency in Context (SALICON) is an ongoing effort that aims at understanding and predicting visual attention. Conventional saliency models typically rely on low-level image statistics to predict human fixations. While these models perform significantly better than chance, there is still a large gap between model prediction and human behavior. This gap is largely due to the limited capability of models in predicting eye fixations with strong semantic content, the so-called semantic gap. This paper presents a focused study to narrow the semantic gap with an architecture based on Deep Neural Network (DNN). It leverages the representational power of high-level semantics encoded in DNNs pretrained for object recognition. Two key components are fine-tuning the DNNs fully convolutionally with an objective function based on the saliency evaluation metrics, and integrating information at different image scales. We compare our method with 14 saliency models on 6 public eye tracking benchmark datasets. Results demonstrate that our DNNs can automatically learn features particularly for saliency prediction that surpass by a big margin the state-of-the-art. In addition, our model ranks top to date under all seven metrics on the MIT300 challenge set.", "Understanding where people look in images is an important problem in computer vision. Despite significant research, it remains unclear to what extent human fixations can be predicted by low-level (contrast) compared to highlevel (presence of objects) image features. Here we address this problem by introducing two novel models that use different feature spaces but the same readout architecture. The first model predicts human fixations based on deep neural network features trained on object recognition. This model sets a new state-of-the art in fixation prediction by achieving top performance in area under the curve metrics on the MIT300 hold-out benchmark (AUC = 88 , sAUC = 77 , NSS = 2.34). The second model uses purely low-level (isotropic contrast) features. This model achieves better performance than all models not using features pretrained on object recognition, making it a strong baseline to assess the utility of high-level features. We then evaluate and visualize which fixations are better explained by lowlevel compared to high-level image features. Surprisingly we find that a substantial proportion of fixations are better explained by the simple low-level model than the stateof- the-art model. Comparing different features within the same powerful readout architecture allows us to better understand the relevance of low- versus high-level features in predicting fixation locations, while simultaneously achieving state-of-the-art saliency prediction.", "This paper presents a novel deep architecture for saliency prediction. Current state of the art models for saliency prediction employ Fully Convolutional networks that perform a non-linear combination of features extracted from the last convolutional layer to predict saliency maps. We propose an architecture which, instead, combines features extracted at different levels of a Convolutional Neural Network (CNN). Our model is composed of three main blocks: a feature extraction CNN, a feature encoding network, that weights low and high level feature maps, and a prior learning network. 
We compare our solution with state of the art saliency models on two public benchmarks datasets. Results show that our model outperforms under all evaluation metrics on the SALICON dataset, which is currently the largest public dataset for saliency prediction, and achieves competitive results on the MIT300 benchmark. Code is available at https: github.com marcellacornia mlnet.", "" ] }
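A minimal PyTorch sketch of the general encoder-decoder recipe described in the related-work paragraph above: feature maps from two depths of a VGG-16 backbone are resized to a common resolution and linearly combined into a single-channel saliency map with a 1x1 convolution. The layer split points, channel counts, and readout head are illustrative assumptions and do not reproduce any specific published architecture; in practice the backbone would be initialized with ImageNet-pretrained weights.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg16

class MultiLayerSaliency(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = vgg16().features            # pre-trained weights would normally be loaded here
        self.block_low = backbone[:16]         # assumed split: up to conv3_3 (256 channels, 1/4 resolution)
        self.block_high = backbone[16:23]      # assumed split: up to conv4_3 (512 channels, 1/8 resolution)
        self.readout = nn.Conv2d(256 + 512, 1, kernel_size=1)  # linear combination of the stacked features

    def forward(self, x):
        low = self.block_low(x)                                    # fine-scale features
        high = self.block_high(low)                                # coarser, more semantic features
        high = F.interpolate(high, size=low.shape[-2:],
                             mode="bilinear", align_corners=False) # bring both maps to the same size
        return torch.sigmoid(self.readout(torch.cat([low, high], dim=1)))

if __name__ == "__main__":
    model = MultiLayerSaliency().eval()
    with torch.no_grad():
        print(model(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 1, 56, 56])
```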
1907.02336
2954329185
Recent advances in deep learning have pushed the performances of visual saliency models way further than it has ever been. Numerous models in the literature present new ways to design neural networks, to arrange gaze pattern data, or to extract as much high and low-level image features as possible in order to create the best saliency representation. However, one key part of a typical deep learning model is often neglected: the choice of the loss function. In this work, we explore some of the most popular loss functions that are used in deep saliency models. We demonstrate that on a fixed network architecture, modifying the loss function can significantly improve (or depreciate) the results, hence emphasizing the importance of the choice of the loss function when designing a model. We also introduce new loss functions that have never been used for saliency prediction to our knowledge. And finally, we show that a linear combination of several well-chosen loss functions leads to significant improvements in performances on different datasets as well as on a different network architecture, hence demonstrating the robustness of a combined metric.
There has also been a wave of models incorporating recurrent neural architectures. Han and Liu @cite_43 proposed a multi-scale architecture using convolutional long short-term memory (ConvLSTM). It is followed by @cite_38 , which uses a slightly modified architecture with multiple layers in the encoder and a different loss function. Recurrent models of saliency prediction are more complex than feed-forward models and more difficult to train. Moreover, their performance is not yet significantly better than that of some recent feed-forward networks such as EML-NET @cite_17 .
{ "cite_N": [ "@cite_43", "@cite_17", "@cite_38" ], "mid": [ "2528092473", "2798322161", "2558906385" ], "abstract": [ "Traditional saliency models usually adopt hand-crafted image features and human-designed mechanisms to calculate local or global contrast. In this paper, we propose a novel computational saliency model, i.e., deep spatial contextual long-term recurrent convolutional network (DSCLRCN), to predict where people look in natural scenes. DSCLRCN first automatically learns saliency related local features on each image location in parallel. Then, in contrast with most other deep network based saliency models which infer saliency in local contexts, DSCLRCN can mimic the cortical lateral inhibition mechanisms in human visual system to incorporate global contexts to assess the saliency of each image location by leveraging the deep spatial long short-term memory (DSLSTM) model. Moreover, we also integrate scene context modulation in DSLSTM for saliency inference, leading to a novel deep spatial contextual LSTM (DSCLSTM) model. The whole network can be trained end-to-end and works efficiently when testing. Experimental results on two benchmark datasets show that DSCLRCN can achieve state-of-the-art performance on saliency detection. Furthermore, the proposed DSCLSTM model can significantly boost the saliency detection performance by incorporating both global spatial interconnections and scene context modulation, which may uncover novel inspirations for studies on them in computational saliency models.", "In this work, we apply state-of-the-art Convolutional Neural Network(CNN) architectures for saliency prediction. Our results show that better saliency features can be delivered by a deeper CNN model. However, it is very space-consuming to apply a complex model due to the large size of input images. The space complexity becomes even more problematic when we extract features from multiple convolutional layers or different models. In this paper, we propose a modular saliency system which aims at splitting the whole network into small modules. The main difference in our approach s that the encoder and decoder can be separately trained for the scalability. Furthermore, the encoder can contain more than one CNN model to extract features and the models can have different architectures or pre-trained on different datasets. This parallel design allows us to better utilize the computational space in order to apply more powerful encoder. More importantly, our network can be easily expanded almost without extra spaces, other pre-trained CNN models can be combined for a wider range of visual knowledge. We denote our expandable multi-layer network as EML-NET in this paper. Our method is evaluated on three public saliency benchmarks, SALICON, MIT300 and CAT2000. The proposed EML-NET achieves state-of-the-art results on the metric of Normalized Scanpath Saliency using a modified loss function.", "Data-driven saliency has recently gained a lot of attention thanks to the use of convolutional neural networks for predicting gaze fixations. In this paper, we go beyond standard approaches to saliency prediction, in which gaze maps are computed with a feed-forward network, and present a novel model which can predict accurate saliency maps by incorporating neural attentive mechanisms. The core of our solution is a convolutional long short-term memory that focuses on the most salient regions of the input image to iteratively refine the predicted saliency map. 
In addition, to tackle the center bias typical of human eye fixations, our model can learn a set of prior maps generated with Gaussian functions. We show, through an extensive evaluation, that the proposed architecture outperforms the current state-of-the-art on public saliency prediction datasets. We further study the contribution of each key component to demonstrate their robustness on different scenarios." ] }
1907.02336
2954329185
Recent advances in deep learning have pushed the performances of visual saliency models way further than it has ever been. Numerous models in the literature present new ways to design neural networks, to arrange gaze pattern data, or to extract as much high and low-level image features as possible in order to create the best saliency representation. However, one key part of a typical deep learning model is often neglected: the choice of the loss function. In this work, we explore some of the most popular loss functions that are used in deep saliency models. We demonstrate that on a fixed network architecture, modifying the loss function can significantly improve (or depreciate) the results, hence emphasizing the importance of the choice of the loss function when designing a model. We also introduce new loss functions that have never been used for saliency prediction to our knowledge. And finally, we show that a linear combination of several well-chosen loss functions leads to significant improvements in performances on different datasets as well as on a different network architecture, hence demonstrating the robustness of a combined metric.
In the literature on deep saliency models, a loss function or a combination of several is chosen based on intuition, the expertise of the authors, or sometimes the mathematical formulation of a model. Kümmerer et al. @cite_26 introduced the idea that information theory can be a good inspiration for saliency metrics. They use information gain to explain how well a model performs compared to a gold-standard baseline. Consequently, they use the log-likelihood as a loss function in @cite_5 , achieving state-of-the-art results in saliency prediction. Jetley et al. @cite_34 are among the very few who specifically focused on the design of loss functions for saliency models. They proposed the use of the Bhattacharyya distance and compared it to 4 other probability distances (two such distances are sketched below). In this paper, in contrast to @cite_34 , we (1) adopt a principled approach to compare existing loss functions and their combinations and (2) investigate their convergence properties over different datasets and network architectures.
{ "cite_N": [ "@cite_5", "@cite_26", "@cite_34" ], "mid": [ "2529173830", "", "2442293398" ], "abstract": [ "Here we present DeepGaze II, a model that predicts where people look in images. The model uses the features from the VGG-19 deep neural network trained to identify objects in images. Contrary to other saliency models that use deep features, here we use the VGG features for saliency prediction with no additional fine-tuning (rather, a few readout layers are trained on top of the VGG features to predict saliency). The model is therefore a strong test of transfer learning. After conservative cross-validation, DeepGaze II explains about 87 of the explainable information gain in the patterns of fixations and achieves top performance in area under the curve metrics on the MIT300 hold-out benchmark. These results corroborate the finding from DeepGaze I (which explained 56 of the explainable information gain), that deep features trained on object recognition provide a versatile feature space for performing related visual tasks. We explore the factors that contribute to this success and present several informative image examples. A web service is available to compute model predictions at this http URL.", "", "Most saliency estimation methods aim to explicitly model low-level conspicuity cues such as edges or blobs and may additionally incorporate top-down cues using face or text detection. Data-driven methods for training saliency models using eye-fixation data are increasingly popular, particularly with the introduction of large-scale datasets and deep architectures. However, current methods in this latter paradigm use loss functions designed for classification or regression tasks whereas saliency estimation is evaluated on topographical maps. In this work, we introduce a new saliency map model which formulates a map as a generalized Bernoulli distribution. We then train a deep architecture to predict such maps using novel loss functions which pair the softmax activation function with measures designed to compute distances between probability distributions. We show in extensive experiments the effectiveness of such loss functions over standard ones on four public benchmark datasets, and demonstrate improved performance over state-of-the-art saliency methods." ] }
1907.02336
2954329185
Recent advances in deep learning have pushed the performances of visual saliency models way further than it has ever been. Numerous models in the literature present new ways to design neural networks, to arrange gaze pattern data, or to extract as much high and low-level image features as possible in order to create the best saliency representation. However, one key part of a typical deep learning model is often neglected: the choice of the loss function. In this work, we explore some of the most popular loss functions that are used in deep saliency models. We demonstrate that on a fixed network architecture, modifying the loss function can significantly improve (or depreciate) the results, hence emphasizing the importance of the choice of the loss function when designing a model. We also introduce new loss functions that have never been used for saliency prediction to our knowledge. And finally, we show that a linear combination of several well-chosen loss functions leads to significant improvements in performances on different datasets as well as on a different network architecture, hence demonstrating the robustness of a combined metric.
With the application of deep learning techniques to the computer vision domain, the choice of an appropriate loss function for a task has become a critical aspect of model training. The computer vision community has been successful in developing task-tailored loss functions to improve a model, e.g., encoding various geometric properties for pose estimation @cite_35 , curating loss functions that enforce perceptual properties of vision for various generative deep models @cite_22 , and exploiting the sparsity within the structure of a problem, e.g., the class imbalance between background and foreground in detection, to reshape standard loss functions into new, effective ones @cite_8 (see the focal-loss sketch below). Our effort follows the same path, identifying the effectiveness of a range of loss functions in saliency prediction.
{ "cite_N": [ "@cite_35", "@cite_22", "@cite_8" ], "mid": [ "2605111497", "2950689937", "2743473392" ], "abstract": [ "Deep learning has shown to be effective for robust and real-time monocular image relocalisation. In particular, PoseNet [22] is a deep convolutional neural network which learns to regress the 6-DOF camera pose from a single image. It learns to localize using high level features and is robust to difficult lighting, motion blur and unknown camera intrinsics, where point based SIFT registration fails. However, it was trained using a naive loss function, with hyper-parameters which require expensive tuning. In this paper, we give the problem a more fundamental theoretical treatment. We explore a number of novel loss functions for learning camera pose which are based on geometry and scene reprojection error. Additionally we show how to automatically learn an optimal weighting to simultaneously regress position and orientation. By leveraging geometry, we demonstrate that our technique significantly improves PoseNets performance across datasets ranging from indoor rooms to a small city.", "We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results.", "The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case. We discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause. We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. To evaluate the effectiveness of our loss, we design and train a simple dense detector we call RetinaNet. Our results show that when trained with the focal loss, RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors. Code is at: this https URL" ] }
1907.02251
2954642157
Consider collections @math and @math of red and blue sets, respectively. Bichromatic Closest Pair is the problem of finding a pair from @math that has similarity higher than a given threshold according to some similarity measure. Our focus here is the classic Jaccard similarity @math for @math . We consider the approximate version of the problem where we are given thresholds @math and wish to return a pair from @math that has Jaccard similarity higher than @math if there exists a pair in @math with Jaccard similarity at least @math . The classic locality sensitive hashing (LSH) algorithm of Indyk and Motwani (STOC '98), instantiated with the MinHash LSH function of , solves this problem in @math time if @math . In particular, for @math , the approximation ratio @math increases polynomially in @math . In this paper we give a corresponding hardness result. Assuming the Orthogonal Vectors Conjecture (OVC), we show that there cannot be a general solution that solves the Bichromatic Closest Pair problem in @math time for @math . Specifically, assuming OVC, we prove that for any @math there exists an @math such that Bichromatic Closest Pair with Jaccard similarity requires time @math for any choice of thresholds @math , that satisfy @math .
Similarity search can be performed in several ways -- a popular technique is Locality Sensitive Hashing (LSH) @cite_4 , which attempts to collect similar items in buckets in order to reduce the number of sets against which similarity must be checked. We can, for example, use Broder's MinHash @cite_0 with locality-sensitive hashing to solve Bichromatic Closest Pair with Jaccard similarity in time @math when @math for any @math (a toy sketch of this scheme is given below). This is done by ensuring that the collision probability for pairs with similarity @math is @math and the collision probability for pairs with similarity @math is @math . Hashing @math times means that we find a pair with similarity @math if one exists. The ChosenPath method presented in @cite_9 also uses the LSH framework to solve Bichromatic Closest Pair with Braun-Blanquet similarity in time @math for thresholds @math .
{ "cite_N": [ "@cite_0", "@cite_9", "@cite_4" ], "mid": [ "", "2574633002", "2147717514" ], "abstract": [ "", "We consider the problem of approximate set similarity search under Braun-Blanquet similarity B(x, y) = |x ∩ y| max(|x|, |y|). The (b1, b2)-approximate Braun-Blanquet similarity search problem is to preprocess a collection of sets P such that, given a query set q, if there exists x E P with B(q, x) ≥ b1, then we can efficiently return x′ E P with B(q, x′) > b2. We present a simple data structure that solves this problem with space usage O(n1+ρlogn + ∑x e P|x|) and query time O(|q|nρ logn) where n = |P| and ρ = log(1 b1) log(1 b2). Making use of existing lower bounds for locality-sensitive hashing by O' (TOCT 2014) we show that this value of ρ is tight across the parameter space, i.e., for every choice of constants 0 In the case where all sets have the same size our solution strictly improves upon the value of ρ that can be obtained through the use of state-of-the-art data-independent techniques in the Indyk-Motwani locality-sensitive hashing framework (STOC 1998) such as Broder's MinHash (CCS 1997) for Jaccard similarity and 's cross-polytope LSH (NIPS 2015) for cosine similarity. Surprisingly, even though our solution is data-independent, for a large part of the parameter space we outperform the currently best data-dependent method by Andoni and Razenshteyn (STOC 2015).", "We present two algorithms for the approximate nearest neighbor problem in high-dimensional spaces. For data sets of size n living in R d , the algorithms require space that is only polynomial in n and d, while achieving query times that are sub-linear in n and polynomial in d. We also show applications to other high-dimensional geometric problems, such as the approximate minimum spanning tree. The article is based on the material from the authors' STOC'98 and FOCS'01 papers. It unifies, generalizes and simplifies the results from those papers." ] }
1907.02251
2954642157
Consider collections @math and @math of red and blue sets, respectively. Bichromatic Closest Pair is the problem of finding a pair from @math that has similarity higher than a given threshold according to some similarity measure. Our focus here is the classic Jaccard similarity @math for @math . We consider the approximate version of the problem where we are given thresholds @math and wish to return a pair from @math that has Jaccard similarity higher than @math if there exists a pair in @math with Jaccard similarity at least @math . The classic locality sensitive hashing (LSH) algorithm of Indyk and Motwani (STOC '98), instantiated with the MinHash LSH function of , solves this problem in @math time if @math . In particular, for @math , the approximation ratio @math increases polynomially in @math . In this paper we give a corresponding hardness result. Assuming the Orthogonal Vectors Conjecture (OVC), we show that there cannot be a general solution that solves the Bichromatic Closest Pair problem in @math time for @math . Specifically, assuming OVC, we prove that for any @math there exists an @math such that Bichromatic Closest Pair with Jaccard similarity requires time @math for any choice of thresholds @math , that satisfy @math .
The proof of Theorem will be based on a result by Rubinstein @cite_3 : Assuming the Orthogonal Vectors Conjecture, a @math -approximation to Bichromatic Closest Pair with Hamming, Edit or Euclidean distance requires time @math . The required approximation factor @math depends on @math , and tends to 1 as @math tends to zero. We translate this into an equivalent conditional lower bound for Jaccard similarity for certain constants @math and @math .
{ "cite_N": [ "@cite_3" ], "mid": [ "2963964051" ], "abstract": [ "We prove conditional near-quadratic running time lower bounds for approximate Bichromatic Closest Pair with Euclidean, Manhattan, Hamming, or edit distance. Specifically, unless the Strong Exponential Time Hypothesis (SETH) is false, for every δ>0 there exists a constant e>0 such that computing a (1+e)-approximation to the Bichromatic Closest Pair requires Ω(n2−δ) time. In particular, this implies a near-linear query time for Approximate Nearest Neighbor search with polynomial preprocessing time. Our reduction uses the recently introduced Distributed PCP framework, but obtains improved efficiency using Algebraic Geometry (AG) codes. Efficient PCPs from AG codes have been constructed in other settings before, but our construction is the first to yield new hardness results." ] }
1907.02251
2954642157
Consider collections @math and @math of red and blue sets, respectively. Bichromatic Closest Pair is the problem of finding a pair from @math that has similarity higher than a given threshold according to some similarity measure. Our focus here is the classic Jaccard similarity @math for @math . We consider the approximate version of the problem where we are given thresholds @math and wish to return a pair from @math that has Jaccard similarity higher than @math if there exists a pair in @math with Jaccard similarity at least @math . The classic locality sensitive hashing (LSH) algorithm of Indyk and Motwani (STOC '98), instantiated with the MinHash LSH function of , solves this problem in @math time if @math . In particular, for @math , the approximation ratio @math increases polynomially in @math . In this paper we give a corresponding hardness result. Assuming the Orthogonal Vectors Conjecture (OVC), we show that there cannot be a general solution that solves the Bichromatic Closest Pair problem in @math time for @math . Specifically, assuming OVC, we prove that for any @math there exists an @math such that Bichromatic Closest Pair with Jaccard similarity requires time @math for any choice of thresholds @math , that satisfy @math .
In order to handle smaller, subconstant values of @math and @math , we use a technique that we call squaring, which allows us to increase the gap in similarity between pairs with high Jaccard similarity and pairs with low Jaccard similarity by computing the Cartesian product of a binary vector with itself (a toy numerical illustration is given below). A similar technique is used by Valiant in @cite_2 to amplify the gap between small and large inner products of vectors. We also see a similar technique in the LSH framework with MinHash, where concatenation of hash values (which are sampled set elements) amplifies the difference in collision probability, and hence in the Jaccard similarity.
{ "cite_N": [ "@cite_2" ], "mid": [ "2263882035" ], "abstract": [ "Given a set of n d-dimensional Boolean vectors with the promise that the vectors are chosen uniformly at random with the exception of two vectors that have Pearson correlation coefficient ρ (Hamming distance dċ 1−ρf2), how quickly can one find the two correlated vectorsq We present an algorithm which, for any constant e>0, and constant ρ>0, runs in expected time O(n5mωf4mωpe pnd) Applications and extensions of this basic algorithm yield significantly improved algorithms for several other problems. Approximate Closest Pair. For any sufficiently small constant e>0, given n d-dimensional vectors, there exists an algorithm that returns a pair of vectors whose Euclidean (or Hamming) distance differs from that of the closest pair by a factor of at most 1pe, and runs in time O(n2mΘ(se)). The best previous algorithms (including Locality Sensitive Hashing) have runtime O(n2mO(e)). Learning Sparse Parities with Noise. Given samples from an instance of the learning parities with noise problem where each example has length n, the true parity set has size at most k « n, and the noise rate is η, there exists an algorithm that identifies the set of k indices in time nωpef3 k poly(1f1m2η) 0.4), improves upon the results of [2011] that give a runtime of n(1p(2 η)2 p o(1))kf2 poly(1f1m2η). Learning k-Juntas with Noise. Given uniformly random length n Boolean vectors, together with a label, which is some function of just k « n of the bits, perturbed by noise rate η, return the set of relevant indices. Leveraging the reduction of [2009], our result for learning k-parities implies an algorithm for this problem with runtime nωpef3 k poly(1f1m2η) Learning k-Juntas without Noise. Given uniformly random length n Boolean vectors, together with a label, which is some function of k « n of the bits, return the set of relevant indices. Using a modification of the algorithm of [2004], and employing our algorithm for learning sparse parities with noise via the reduction of [2009], we obtain an algorithm for this problem with runtime nωp ef4 k poly(n)" ] }
1907.02251
2954642157
Consider collections @math and @math of red and blue sets, respectively. Bichromatic Closest Pair is the problem of finding a pair from @math that has similarity higher than a given threshold according to some similarity measure. Our focus here is the classic Jaccard similarity @math for @math . We consider the approximate version of the problem where we are given thresholds @math and wish to return a pair from @math that has Jaccard similarity higher than @math if there exists a pair in @math with Jaccard similarity at least @math . The classic locality sensitive hashing (LSH) algorithm of Indyk and Motwani (STOC '98), instantiated with the MinHash LSH function of , solves this problem in @math time if @math . In particular, for @math , the approximation ratio @math increases polynomially in @math . In this paper we give a corresponding hardness result. Assuming the Orthogonal Vectors Conjecture (OVC), we show that there cannot be a general solution that solves the Bichromatic Closest Pair problem in @math time for @math . Specifically, assuming OVC, we prove that for any @math there exists an @math such that Bichromatic Closest Pair with Jaccard similarity requires time @math for any choice of thresholds @math , that satisfy @math .
Combining two simple reductions with the above squaring, we show that for any @math , we can always find @math such that Bichromatic Closest Pair with Jaccard similarity cannot be solved in time @math for any pair @math when @math . Contrast this with the above LSH upper bound of @math for @math . We also know that there are parts of the parameter space where @math that can be solved in @math time; see the discussion in @cite_9 . While LSH with MinHash is not the fastest possible algorithm in terms of the exponent achieved, it has been unclear how far from optimal it might be.
{ "cite_N": [ "@cite_9" ], "mid": [ "2574633002" ], "abstract": [ "We consider the problem of approximate set similarity search under Braun-Blanquet similarity B(x, y) = |x ∩ y| max(|x|, |y|). The (b1, b2)-approximate Braun-Blanquet similarity search problem is to preprocess a collection of sets P such that, given a query set q, if there exists x E P with B(q, x) ≥ b1, then we can efficiently return x′ E P with B(q, x′) > b2. We present a simple data structure that solves this problem with space usage O(n1+ρlogn + ∑x e P|x|) and query time O(|q|nρ logn) where n = |P| and ρ = log(1 b1) log(1 b2). Making use of existing lower bounds for locality-sensitive hashing by O' (TOCT 2014) we show that this value of ρ is tight across the parameter space, i.e., for every choice of constants 0 In the case where all sets have the same size our solution strictly improves upon the value of ρ that can be obtained through the use of state-of-the-art data-independent techniques in the Indyk-Motwani locality-sensitive hashing framework (STOC 1998) such as Broder's MinHash (CCS 1997) for Jaccard similarity and 's cross-polytope LSH (NIPS 2015) for cosine similarity. Surprisingly, even though our solution is data-independent, for a large part of the parameter space we outperform the currently best data-dependent method by Andoni and Razenshteyn (STOC 2015)." ] }
1907.02364
2947492009
By borrowing the wisdom of human in gaze following, we propose a two-stage solution for gaze point prediction of the target persons in a scene. Specifically, in the first stage, both head image and its position are fed into a gaze direction pathway to predict the gaze direction, and then multi-scale gaze direction fields are generated to characterize the distribution of gaze points without considering the scene contents. In the second stage, the multi-scale gaze direction fields are concatenated with the image contents and fed into a heatmap pathway for heatmap regression. There are two merits for our two-stage solution based gaze following: i) our solution mimics the behavior of human in gaze following, therefore it is more psychological plausible; ii) besides using heatmap to supervise the output of our network, we can also leverage gaze direction to facilitate the training of gaze direction pathway, therefore our network can be more robustly trained. Considering that existing gaze following dataset is annotated by the third-view persons, we build a video gaze following dataset, where the ground truth is annotated by the observers in the videos. Therefore it is more reliable. The evaluation with such a dataset reflects the capacity of different methods in real scenarios better. Extensive experiments on both datasets show that our method significantly outperforms existing methods, which validates the effectiveness of our solution for gaze following. Our dataset and codes are released in this https URL.
Previous work on gaze following paid attention to restricted scenes and added priors for specific applications. In @cite_30 , a face detector was employed to extract faces, which was limited for people looking away from the camera. @cite_19 detected whether people in a movie were looking at each other, which was helpful for interaction analysis. An eye tracker was utilized to predict the next object in order to improve action recognition in @cite_29 . @cite_32 only estimated the gaze direction from the head position, not the specific gaze point. These methods were applied to particular scenes. Recent works @cite_13 @cite_2 @cite_21 focused on general gaze following, which has wider applications. Given a single picture containing one or more people, the gaze points of some people in the image were estimated, without any restrictions, in @cite_13 . Some extended works @cite_2 @cite_21 focused on multi-modality images or predicted gaze points in videos. RGB-D images were introduced to predict gaze in images and videos @cite_2 , because the multi-modality data provide 3D head pose information and thus help find more accurate gaze points. In @cite_21 , the cross-frame gaze point in videos can be predicted for the people in a frame.
{ "cite_N": [ "@cite_30", "@cite_29", "@cite_21", "@cite_32", "@cite_19", "@cite_2", "@cite_13" ], "mid": [ "2047508432", "2212494831", "2776312359", "1994463521", "1971029019", "1896788142", "" ], "abstract": [ "We present a unified model for face detection, pose estimation, and landmark estimation in real-world, cluttered images. Our model is based on a mixtures of trees with a shared pool of parts; we model every facial landmark as a part and use global mixtures to capture topological changes due to viewpoint. We show that tree-structured models are surprisingly effective at capturing global elastic deformation, while being easy to optimize unlike dense graph structures. We present extensive results on standard face benchmarks, as well as a new “in the wild” annotated dataset, that suggests our system advances the state-of-the-art, sometimes considerably, for all three tasks. Though our model is modestly trained with hundreds of faces, it compares favorably to commercial systems trained with billions of examples (such as Google Picasa and face.com).", "We present a probabilistic generative model for simultaneously recognizing daily actions and predicting gaze locations in videos recorded from an egocentric camera. We focus on activities requiring eye-hand coordination and model the spatio-temporal relationship between the gaze point, the scene objects, and the action label. Our model captures the fact that the distribution of both visual features and object occurrences in the vicinity of the gaze point is correlated with the verb-object pair describing the action. It explicitly incorporates known properties of gaze behavior from the psychology literature, such as the temporal delay between fixation and manipulation events. We present an inference method that can predict the best sequence of gaze locations and the associated action label from an input sequence of images. We demonstrate improvements in action recognition rates and gaze prediction accuracy relative to state-of-the-art methods, on two new datasets that contain egocentric videos of daily activities and gaze.", "Following the gaze of people inside videos is an important signal for understanding people and their actions. In this paper, we present an approach for following gaze in video by predicting where a person (in the video) is looking even when the object is in a different frame. We collect VideoGaze, a new dataset which we use as a benchmark to both train and evaluate models. Given one frame with a person in it, our model estimates a density for gaze location in every frame and the probability that the person is looking in that particular frame. A key aspect of our approach is an end-to-end model that jointly estimates: saliency, gaze pose, and geometric relationships between views while only using gaze as supervision. Visualizations suggest that the model learns to internally solve these intermediate tasks automatically without additional supervision. Experiments show that our approach follows gaze in video better than existing approaches, enabling a richer understanding of human activities in video.", "Previous studies have shown that gaze direction of actors in a scene influences eye movements of passive observers during free-viewing (Castelhano, Wieth, & Henderson, 2007; Borji, Parks, & Itti, 2014). However, no computational model has been proposed to combine bottom-up saliency with actor’s head pose and gaze direction for predicting where observers look. 
Here, we first learn probability maps that predict fixations leaving head regions (gaze following fixations), as well as fixations on head regions (head fixations), both dependent on the actor’s head size and pose angle. We then learn a combination of gaze following, head region, and bottom-up saliency maps with a Markov chain composed of head region and non-head region states. This simple structure allows us to inspect the model and make comments about the nature of eye movements originating from heads as opposed to other regions. Here, we assume perfect knowledge of actor head pose direction (from an oracle). The combined model, which we call the Dynamic Weighting of Cues model (DWOC), explains observers’ fixations significantly better than each of the constituent components. Finally, in a fully automatic combined model, we replace the oracle head pose direction data with detections from a computer vision model of head pose. Using these (imperfect) automated detections, we again find that the combined model significantly outperforms its individual components. Our work extends the engineering and scientific applications of saliency models and helps better understand mechanisms of visual attention.", "The objective of this work is to determine if people are interacting in TV video by detecting whether they are looking at each other or not. We determine both the temporal period of the interaction and also spatially localize the relevant people. We make the following four contributions: (i) head detection with implicit coarse pose information (front, profile, back); (ii) continuous head pose estimation in unconstrained scenarios (TV video) using Gaussian process regression; (iii) propose and evaluate several methods for assessing whether and when pairs of people are looking at each other in a video shot; and (iv) introduce new ground truth annotation for this task, extending the TV human interactions dataset (Patron- 2010) The performance of the methods is evaluated on this dataset, which consists of 300 video clips extracted from TV shows. Despite the variety and difficulty of this video material, our best method obtains an average precision of 87.6 in a fully automatic manner.", "In this paper we present a convolutional neural network (CNN)-based model for human head pose estimation in low-resolution multi-modal RGB-D data. We pose the problem as one of classification of human gazing direction. We further fine-tune a regressor based on the learned deep classifier. Next we combine the two models (classification and regression) to estimate approximate regression confidence. We present state-of-the-art results in datasets that span the range of high-resolution human robot interaction (close up faces plus depth information) data to challenging low resolution outdoor surveillance data. We build upon our robust head-pose estimation and further introduce a new visual attention model to recover interaction with the environment . Using this probabilistic model, we show that many higher level scene understanding like human-human scene interaction detection can be achieved. Our solution runs in real-time on commercial hardware.", "" ] }
1907.02364
2947492009
By borrowing the wisdom of human in gaze following, we propose a two-stage solution for gaze point prediction of the target persons in a scene. Specifically, in the first stage, both head image and its position are fed into a gaze direction pathway to predict the gaze direction, and then multi-scale gaze direction fields are generated to characterize the distribution of gaze points without considering the scene contents. In the second stage, the multi-scale gaze direction fields are concatenated with the image contents and fed into a heatmap pathway for heatmap regression. There are two merits for our two-stage solution based gaze following: i) our solution mimics the behavior of human in gaze following, therefore it is more psychological plausible; ii) besides using heatmap to supervise the output of our network, we can also leverage gaze direction to facilitate the training of gaze direction pathway, therefore our network can be more robustly trained. Considering that existing gaze following dataset is annotated by the third-view persons, we build a video gaze following dataset, where the ground truth is annotated by the observers in the videos. Therefore it is more reliable. The evaluation with such a dataset reflects the capacity of different methods in real scenarios better. Extensive experiments on both datasets show that our method significantly outperforms existing methods, which validates the effectiveness of our solution for gaze following. Our dataset and codes are released in this https URL.
Eye tracking is strongly related to gaze following. Different from gaze following, eye tracking technology infers which direction, or which point on a screen, a person is looking at @cite_0 . Previous work @cite_27 @cite_31 built geometric models to infer the gaze point on the screen target. Recently, many appearance-based methods @cite_26 @cite_20 solved the problem by learning a complex function from eye images to the gaze point, which requires a large-scale dataset. These methods take eye images and the face image as inputs because gaze direction can be determined from eye movement and head pose @cite_20 . However, eye images cannot be used to predict the gaze point in gaze following because they are often occluded or very noisy. Thus, in gaze following, the gaze direction is obtained almost exclusively from the head image.
{ "cite_N": [ "@cite_26", "@cite_0", "@cite_27", "@cite_31", "@cite_20" ], "mid": [ "2952055246", "2027879843", "2130313210", "2010854031", "2778268008" ], "abstract": [ "From scientific research to commercial applications, eye tracking is an important tool across many domains. Despite its range of applications, eye tracking has yet to become a pervasive technology. We believe that we can put the power of eye tracking in everyone's palm by building eye tracking software that works on commodity hardware such as mobile phones and tablets, without the need for additional sensors or devices. We tackle this problem by introducing GazeCapture, the first large-scale dataset for eye tracking, containing data from over 1450 people consisting of almost 2.5M frames. Using GazeCapture, we train iTracker, a convolutional neural network for eye tracking, which achieves a significant reduction in error over previous approaches while running in real time (10-15fps) on a modern mobile device. Our model achieves a prediction error of 1.71cm and 2.53cm without calibration on mobile phones and tablets respectively. With calibration, this is reduced to 1.34cm and 2.12cm. Further, we demonstrate that the features learned by iTracker generalize well to other datasets, achieving state-of-the-art results. The code, data, and models are available at this http URL", "Appearance-based gaze estimation is believed to work well in real-world settings, but existing datasets have been collected under controlled laboratory conditions and methods have been not evaluated across multiple datasets. In this work we study appearance-based gaze estimation in the wild. We present the MPIIGaze dataset that contains 213,659 images we collected from 15 participants during natural everyday laptop use over more than three months. Our dataset is significantly more variable than existing ones with respect to appearance and illumination. We also present a method for in-the-wild appearance-based gaze estimation using multimodal convolutional neural networks that significantly outperforms state-of-the art methods in the most challenging cross-dataset evaluation. We present an extensive evaluation of several state-of-the-art image-based gaze estimation algorithms on three current datasets, including our own. This evaluation provides clear insights and allows us to identify key research challenges of gaze estimation in the wild.", "Most available remote eye gaze trackers based on pupil center corneal reflection (PCCR) technique have two characteristics that prevent them from being widely used as an important computer input device for human computer interaction. First, they must often be calibrated repeatedly for each individual; second, they have low tolerance for head movements and require the user to hold the head uncomfortably still. In this paper, we propose a solution for the classical PCCR technique that simplify the calibration procedure and allow free head movements. The core of our method is to analytically obtain a head mapping function to compensate head movement. Specifically, the head mapping function allows to automatically map the eye movement measurement under an arbitrary head position to a reference head position so that the gaze can be estimated from the mapped eye measurement with respect to the reference head position. Furthermore, our method minimizes the calibration procedure to only one time for each individual. 
Our proposed method significantly improves the usability of the eye gaze tracking technology, which is a major step for eye tracker to be accepted as a natural computer input device.", "Eye-gaze as a form of human machine interface holds great promise for improving the way we interact with machines. Eye-gaze tracking devices that are non-contact, non-restrictive, accurate and easy to use will increase the appeal for including eye-gaze information in future applications. The system we have developed and which we describe in this paper achieves these goals using a single high resolution camera with a fixed field of view. The single camera system has no moving parts which results in rapid reacquisition of the eye after loss of tracking. Free head motion is achieved using multiple glints and 3D modeling techniques. Accuracies of under 1° of visual angle are achieved over a field of view of 14x12x20 cm and over various hardware configurations, camera resolutions and frame rates.", "Free-head 3D gaze tracking outputs both the eye location and the gaze vector in 3D space, and it has wide applications in scenarios such as driver monitoring, advertisement analysis and surveillance. A reliable and low-cost monocular solution is critical for pervasive usage in these areas. Noticing that a gaze vector is a composition of head pose and eyeball movement in a geometrically deterministic way, we propose a novel gaze transform layer to connect separate head pose and eyeball movement models. The proposed decomposition does not suffer from head-gaze correlation overfitting and makes it possible to use datasets existing for other tasks. To add stronger supervision for better network training, we propose a two-step training strategy, which first trains sub-tasks with rough labels and then jointly trains with accurate gaze labels. To enable good cross-subject performance under various conditions, we collect a large dataset which has full coverage of head poses and eyeball movements, contains 200 subjects, and has diverse illumination conditions. Our deep solution achieves state-of-the-art gaze tracking accuracy, reaching 5.6° cross-subject prediction error using a small network running at 1000 fps on a single CPU (excluding face alignment time) and 4.3° cross-subject error with a deeper network." ] }
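The appearance-based eye-tracking methods summarized in the related-work text above learn a mapping from eye/face appearance to a gaze point. A minimal sketch of that idea, assuming flattened grayscale eye patches and 2-D on-screen gaze targets, with ridge regression standing in for the deep models the cited works actually use (all shapes, names and values here are illustrative, not taken from those papers):

```python
import numpy as np

# Illustrative shapes: 500 training samples of 24x36 grayscale eye patches,
# each labelled with a 2-D on-screen gaze point (x, y) in pixels.
rng = np.random.default_rng(0)
X = rng.random((500, 24 * 36))           # flattened eye-patch appearance features
Y = rng.random((500, 2)) * [1920, 1080]  # ground-truth gaze points

# Ridge regression stands in for the "complex function" learned by the
# appearance-based methods (which in practice are deep CNNs).
lam = 1e-2
Xb = np.hstack([X, np.ones((X.shape[0], 1))])   # add a bias column
W = np.linalg.solve(Xb.T @ Xb + lam * np.eye(Xb.shape[1]), Xb.T @ Y)

def predict_gaze(eye_patch_vec):
    """Map a flattened eye patch to an estimated (x, y) gaze point."""
    return np.append(eye_patch_vec, 1.0) @ W

print(predict_gaze(X[0]))
```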
1907.02364
2947492009
By borrowing the wisdom of human in gaze following, we propose a two-stage solution for gaze point prediction of the target persons in a scene. Specifically, in the first stage, both head image and its position are fed into a gaze direction pathway to predict the gaze direction, and then multi-scale gaze direction fields are generated to characterize the distribution of gaze points without considering the scene contents. In the second stage, the multi-scale gaze direction fields are concatenated with the image contents and fed into a heatmap pathway for heatmap regression. There are two merits for our two-stage solution based gaze following: i) our solution mimics the behavior of human in gaze following, therefore it is more psychological plausible; ii) besides using heatmap to supervise the output of our network, we can also leverage gaze direction to facilitate the training of gaze direction pathway, therefore our network can be more robustly trained. Considering that existing gaze following dataset is annotated by the third-view persons, we build a video gaze following dataset, where the ground truth is annotated by the observers in the videos. Therefore it is more reliable. The evaluation with such a dataset reflects the capacity of different methods in real scenarios better. Extensive experiments on both datasets show that our method significantly outperforms existing methods, which validates the effectiveness of our solution for gaze following. Our dataset and codes are released in this https URL.
Saliency detection and gaze following are two different tasks @cite_13 @cite_21 , even though they are closely related. Saliency detection predicts the map of observers' fixations over the original image @cite_22 @cite_24 @cite_25 . Gaze following in an image predicts the position that people in the scene are looking at. Early works on saliency prediction considered low-level features and saliency maps at different scales @cite_12 . Subsequently, features from different levels were combined to model a bottom-up, top-down architecture @cite_24 . Recently, deep neural networks have been applied to saliency prediction and achieved great success @cite_28 @cite_17 . However, the object in the gaze point region may not be salient, which indicates that it is hard to find the gaze point directly with a saliency algorithm.
{ "cite_N": [ "@cite_22", "@cite_28", "@cite_21", "@cite_24", "@cite_13", "@cite_25", "@cite_12", "@cite_17" ], "mid": [ "2144764737", "1946606198", "2776312359", "1510835000", "", "2963985934", "2128272608", "2583180462" ], "abstract": [ "Five important trends have emerged from recent work on computational models of focal visual attention that emphasize the bottom-up, image-based control of attentional deployment. First, the perceptual saliency of stimuli critically depends on the surrounding context. Second, a unique 'saliency map' that topographically encodes for stimulus conspicuity over the visual scene has proved to be an efficient and plausible bottom-up control strategy. Third, inhibition of return, the process by which the currently attended location is prevented from being attended again, is a crucial element of attentional deployment. Fourth, attention and eye movements tightly interplay, posing computational challenges with respect to the coordinate system used to control attention. And last, scene understanding and object recognition strongly constrain the selection of attended locations. Insights from these five key areas provide a framework for a computational and neurobiological understanding of visual attention.", "Recent results suggest that state-of-the-art saliency models perform far from optimal in predicting fixations. This lack in performance has been attributed to an inability to model the influence of high-level image features such as objects. Recent seminal advances in applying deep neural networks to tasks like object recognition suggests that they are able to capture this kind of structure. However, the enormous amount of training data necessary to train these networks makes them difficult to apply directly to saliency prediction. We present a novel way of reusing existing neural networks that have been pretrained on the task of object recognition in models of fixation prediction. Using the well-known network of (2012), we come up with a new saliency model that significantly outperforms all state-of-the-art models on the MIT Saliency Benchmark. We show that the structure of this network allows new insights in the psychophysics of fixation selection and potentially their neural implementation. To train our network, we build on recent work on the modeling of saliency as point processes.", "Following the gaze of people inside videos is an important signal for understanding people and their actions. In this paper, we present an approach for following gaze in video by predicting where a person (in the video) is looking even when the object is in a different frame. We collect VideoGaze, a new dataset which we use as a benchmark to both train and evaluate models. Given one frame with a person in it, our model estimates a density for gaze location in every frame and the probability that the person is looking in that particular frame. A key aspect of our approach is an end-to-end model that jointly estimates: saliency, gaze pose, and geometric relationships between views while only using gaze as supervision. Visualizations suggest that the model learns to internally solve these intermediate tasks automatically without additional supervision. Experiments show that our approach follows gaze in video better than existing approaches, enabling a richer understanding of human activities in video.", "For many applications in graphics, design, and human computer interaction, it is essential to understand where humans look in a scene. 
Where eye tracking devices are not a viable option, models of saliency can be used to predict fixation locations. Most saliency approaches are based on bottom-up computation that does not consider top-down image semantics and often does not match actual eye movements. To address this problem, we collected eye tracking data of 15 viewers on 1003 images and use this database as training and testing examples to learn a model of saliency based on low, middle and high-level image features. This large database of eye tracking data is publicly available with this paper.", "", "In this paper we introduce a novel Depth-Aware Video Saliency approach to predict human focus of attention when viewing videos that contain a depth map (RGBD) on a 2D screen. Saliency estimation in this scenario is highly important since in the near future 3D video content will be easily acquired yet hard to display. Despite considerable progress in 3D display technologies, most are still expensive and require special glasses for viewing, so RGBD content is primarily viewed on 2D screens, removing the depth channel from the final viewing experience. We train a generative convolutional neural network that predicts the 2D viewing saliency map for a given frame using the RGBD pixel values and previous fixation estimates in the video. To evaluate the performance of our approach, we present a new comprehensive database of 2D viewing eye-fixation ground-truth for RGBD videos. Our experiments indicate that it is beneficial to integrate depth into video saliency estimates for content that is viewed on a 2D display. We demonstrate that our approach outperforms state-of-the-art methods for video saliency, achieving 15 relative improvement.", "A visual attention system, inspired by the behavior and the neuronal architecture of the early primate visual system, is presented. Multiscale image features are combined into a single topographical saliency map. A dynamical neural network then selects attended locations in order of decreasing saliency. The system breaks down the complex problem of scene understanding by rapidly selecting, in a computationally efficient manner, conspicuous locations to be analyzed in detail.", "We introduce SalGAN, a deep convolutional neural network for visual saliency prediction trained with adversarial examples. The first stage of the network consists of a generator model whose weights are learned by back-propagation computed from a binary cross entropy (BCE) loss over downsampled versions of the saliency maps. The resulting prediction is processed by a discriminator network trained to solve a binary classification task between the saliency maps generated by the generative stage and the ground truth ones. Our experiments show how adversarial training allows reaching state-of-the-art performance across different metrics when combined with a widely-used loss function like BCE. Our results can be reproduced with the source code and trained models available at https: imatge-upc.github. io saliency-salgan-2017 ." ] }
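The bottom-up saliency models mentioned above combine low-level features at several scales. A minimal center-surround sketch of that idea on the intensity channel only (the scale choices and normalization are illustrative assumptions; the cited models also use color and orientation channels):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_map(gray_img, center_sigmas=(1, 2), surround_sigmas=(4, 8)):
    """Tiny bottom-up saliency sketch: sum of center-surround differences
    of the intensity channel at a few scales."""
    sal = np.zeros_like(gray_img, dtype=float)
    for c in center_sigmas:
        for s in surround_sigmas:
            center = gaussian_filter(gray_img, sigma=c)
            surround = gaussian_filter(gray_img, sigma=s)
            sal += np.abs(center - surround)
    sal -= sal.min()
    if sal.max() > 0:
        sal /= sal.max()              # normalize to [0, 1]
    return sal

img = np.random.rand(120, 160)        # stand-in grayscale frame
print(saliency_map(img).shape)        # (120, 160)
```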
1907.02110
2954384599
Segmentation has been a major task in neuroimaging. A large number of automated methods have been developed for segmenting healthy and diseased brain tissues. In recent years, deep learning techniques have attracted a lot of attention as a result of their high accuracy in different segmentation problems. We present a new deep learning based segmentation method, DeepMRSeg, that can be applied in a generic way to a variety of segmentation tasks. The proposed architecture combines recent advances in the field of biomedical image segmentation and computer vision. We use a modified UNet architecture that takes advantage of multiple convolution filter sizes to achieve multi-scale feature extraction adaptive to the desired segmentation task. Importantly, our method operates on minimally processed raw MRI scan. We validated our method on a wide range of segmentation tasks, including white matter lesion segmentation, segmentation of deep brain structures and hippocampus segmentation. We provide code and pre-trained models to allow researchers apply our method on their own datasets.
The UNet architecture was introduced in @cite_1 . UNet has been an important advancement in the application of deep CNNs to biomedical image segmentation. CNNs were initially used for classification problems, mapping input images to output class labels. In segmentation tasks, however, the desired output is an image, e.g. a binary segmentation map. UNet extended the CNN architecture by supplementing the usual contracting (encoding) path with a symmetric expanding (decoding) path, where pooling operators are replaced by upsampling operators. The encoding path allows the architecture to learn spatially relevant context information, while the decoding path adds precise localization back, so the model outputs a final segmentation image.
{ "cite_N": [ "@cite_1" ], "mid": [ "1901129140" ], "abstract": [ "There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http: lmb.informatik.uni-freiburg.de people ronneber u-net ." ] }
1907.02110
2954384599
Segmentation has been a major task in neuroimaging. A large number of automated methods have been developed for segmenting healthy and diseased brain tissues. In recent years, deep learning techniques have attracted a lot of attention as a result of their high accuracy in different segmentation problems. We present a new deep learning based segmentation method, DeepMRSeg, that can be applied in a generic way to a variety of segmentation tasks. The proposed architecture combines recent advances in the field of biomedical image segmentation and computer vision. We use a modified UNet architecture that takes advantage of multiple convolution filter sizes to achieve multi-scale feature extraction adaptive to the desired segmentation task. Importantly, our method operates on minimally processed raw MRI scan. We validated our method on a wide range of segmentation tasks, including white matter lesion segmentation, segmentation of deep brain structures and hippocampus segmentation. We provide code and pre-trained models to allow researchers apply our method on their own datasets.
As noted by @cite_0 , when convolutional filters are arranged in different groups, the network can learn distinct features from each group, with low correlation between the learned features across groups. This was demonstrated in AlexNet, where the network consistently identified color-agnostic and color-specific features in different filter groups. The same concept also applies to the Inception network, where, through the use of different convolutional filters at a single layer, the network learns feature representations at different resolution levels.
{ "cite_N": [ "@cite_0" ], "mid": [ "2163605009" ], "abstract": [ "We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5 and 17.0 which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overriding in the fully-connected layers we employed a recently-developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3 , compared to 26.2 achieved by the second-best entry." ] }
1812.06329
2904942582
While many approaches have been proposed to analyze the problem of matrix multiplication parallel computing, few of them address the problem on heterogeneous processor platforms. It still remains an open question on heterogeneous processor platforms to find the optimal schedule that balances the load within the heterogeneous processor set while minimizing the amount of communication. A great many studies are based on rectangular partition, whereas the optimality of rectangular partition as the basis has not been well justified. In this paper, we propose a new method that schedules matrix multiplication on heterogeneous processor platforms with the mixed co-design goal of minimizing the total communication volume and the multiplication completion time. We first present the schema of our layer based partition (LBP) method. Subsequently, we demonstrate that our approach guarantees minimal communication volume, which is smaller than what rectangular partition can reach. We further analyze the problem of minimizing the task completion time, with network topologies taken into account. We solve this problem in both single-neighbor network case and multi-neighbor network case. In single-neighbor network cases, we propose an equality based method to solve LBP, and simulation shows that the total communication volume is reduced by 75 from the lower bound of rectangular partition. In multi-neighbor network cases, we formulate LBP as a Mixed Integer Programming problem, and reduce the total communication volume by 81 through simulation. To summarize, this is a promising perspective of tackling matrix multiplication problems on heterogeneous processor platforms.
Approaches on homogeneous platforms. Homogeneous platforms assume that all the computing and communication resources and the environment are identical. Matrix multiplication scheduling on homogeneous platforms has been extensively studied in @cite_9 - @cite_29 . Among these, Cannon introduces the first parallel algorithm on homogeneous grids @cite_9 . Fox et al. extend the analysis to two-dimensional meshes and hypercubes @cite_8 . SUMMA overcomes the shortcomings of Cannon's and Fox's algorithms and has become the most widely applied parallel matrix multiplication scheme @cite_13 . Solomonik @cite_24 proposes a method known as the '2.5D algorithm', which achieves asymptotically less communication than Cannon's algorithm and is faster in practice. Malysiak @cite_21 presents a novel model for distributing matrix multiplication within homogeneous systems with multiple hierarchies. Scheduling of sparse-dense matrix multiplication has been studied in @cite_26 @cite_23 .
{ "cite_N": [ "@cite_26", "@cite_8", "@cite_9", "@cite_29", "@cite_21", "@cite_24", "@cite_23", "@cite_13" ], "mid": [ "2028045344", "2082292996", "", "2075711817", "2249672196", "201315547", "2483598939", "" ], "abstract": [ "The sparse matrix-vector multiplication is an important computational kernel, but is hard to efficiently execute even in the sequential case. The problems--namely low arithmetic intensity, inefficient cache use, and limited memory bandwidth--are magnified as the core count on shared-memory parallel architectures increases. Existing techniques are discussed in detail, and categorized chiefly based on their distribution types. Based on this, new parallelization techniques are proposed. The theoretical scalability and memory usage of the various strategies are analyzed, and experiments on multiple NUMA architectures confirm the validity of the results. One of the newly proposed methods attains the best average result in experiments on a large set of matrices. In one of the experiments it obtains a parallel efficiency of 90 percent, while on average it performs close to 60 percent.", "Abstract We discuss algorithms for matrix multiplication on a concurrent processor containing a two-dimensional mesh or richer topology. We present detailed performance measurements on hypercubes with 4, 16, and 64 nodes, and analyze them in terms of communication overhead and load balancing. We show that the decomposition into square subblocks is optimal C code implementing the algorithms is available.", "", "A large number of algorithms have been developed for solving large dimension matrix multiplication through parallel computation. Lots of algorithms have been developed keeping performance matrices such as speed up, efficiency, isoefficiency etc. in linear order. We have compared the performance of simple block checkerboard partitioning algorithm with cannon's algorithm over 2D mesh topology in HPC Maverick (Rocks 5.4) by taking the mathematical problem matrix multiplication. Till the date not any of the algorithms clearly claimed to be superior then the others. It seems to be advantageous to partition matrix into blocks for multiplying on the 2D Mesh.", "We present a novel approach of distributing matrix multiplications among GPU-equipped nodes in a cluster system. In this context we discuss the induced challenges and possible solutions. Additionally we state an algorithm which outperforms optimized GPU BLAS libraries for small matrices. Furthermore we provide a novel theoretical model for distributing algorithms within homogeneous computation systems with multiple hierarchies. In the context of this model we develop an algorithm which can find the optimal distribution parameters for each involved subalgorithm. We provide a detailed analysis of the algorithms space and time complexities and justify its use with a structured evaluation within a small GPU-equipped Beowulf cluster.", "Extra memory allows parallel matrix multiplication to be done with asymptotically less communication than Cannon's algorithm and be faster in practice. \"3D\" algorithms arrange the p processors in a 3D array, and store redundant copies of the matrices on each of p1 3 layers. \"2D\" algorithms such as Cannon's algorithm store a single copy of the matrices on a 2D array of processors. We generalize these 2D and 3D algorithms by introducing a new class of \"2.5D algorithms\". 
For matrix multiplication, we can take advantage of any amount of extra memory to store c copies of the data, for any c ∈ 1, 2,..., ⌊p1 3⌋ , to reduce the bandwidth cost of Cannon's algorithm by a factor of c1 2 and the latency cost by a factor c3 2. We also show that these costs reach the lower bounds, modulo polylog(p) factors. We introduce a novel algorithm for 2.5D LU decomposition. To the best of our knowledge, this LU algorithm is the first to minimize communication along the critical path of execution in the 3D case. Our 2.5D LU algorithm uses communicationavoiding pivoting, a stable alternative to partial-pivoting. We prove a novel lower bound on the latency cost of 2.5D and 3D LU factorization, showing that while c copies of the data can also reduce the bandwidth by a factor of c1 2, the latency must increase by a factor of c1 2, so that the 2D LU algorithm (c = 1) in fact minimizes latency. We provide implementations and performance results for 2D and 2.5D versions of all the new algorithms. Our results demonstrate that 2.5D matrix multiplication and LU algorithms strongly scale more efficiently than 2D algorithms. Each of our 2.5D algorithms performs over 2X faster than the corresponding 2D algorithm for certain problem sizes on 65,536 cores of a BG P supercomputer.", "Multiplication of a sparse matrix with a dense matrix is a building block of an increasing number of applications in many areas such as machine learning and graph algorithms. However, most previous work on parallel matrix multiplication considered only both dense or both sparse matrix operands. This paper analyzes the communication lower bounds and compares the communication costs of various classic parallel algorithms in the context of sparse-dense matrix-matrix multiplication. We also present new communication-avoiding algorithms based on a 1D decomposition, called 1.5D, which -- while suboptimal in dense-dense and sparse-sparse cases -- outperform the 2D and 3D variants both theoretically and in practice for sparse-dense multiplication. Our analysis separates one-time costs from per iteration costs in an iterative machine learning context. Experiments demonstrate speedups up to 100x over a baseline 3D SUMMA implementation and show parallel scaling over 10 thousand cores.", "" ] }
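As a concrete illustration of the grid algorithms discussed above, the following single-process numpy sketch simulates Cannon-style block shifting on a q x q grid of logical processors (the grid and matrix sizes are illustrative; a real implementation would exchange the blocks over MPI rather than reindex Python lists):

```python
import numpy as np

def cannon_matmul(A, B, q=2):
    """Simulate Cannon's algorithm on a q x q grid of blocks.
    Each grid cell plays the role of one processor."""
    n = A.shape[0]
    bs = n // q
    blk = lambda M, i, j: M[i*bs:(i+1)*bs, j*bs:(j+1)*bs].copy()

    # Initial skew: row i of A is shifted left by i, column j of B up by j.
    Ab = [[blk(A, i, (j + i) % q) for j in range(q)] for i in range(q)]
    Bb = [[blk(B, (i + j) % q, j) for j in range(q)] for i in range(q)]
    Cb = [[np.zeros((bs, bs)) for _ in range(q)] for _ in range(q)]

    for _ in range(q):
        for i in range(q):
            for j in range(q):
                Cb[i][j] += Ab[i][j] @ Bb[i][j]   # local block multiply
        # "Communication" step: shift A-blocks left and B-blocks up by one.
        Ab = [[Ab[i][(j + 1) % q] for j in range(q)] for i in range(q)]
        Bb = [[Bb[(i + 1) % q][j] for j in range(q)] for i in range(q)]

    return np.block(Cb)

A, B = np.random.rand(4, 4), np.random.rand(4, 4)
print(np.allclose(cannon_matmul(A, B), A @ B))    # True
```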
1812.06329
2904942582
While many approaches have been proposed to analyze the problem of matrix multiplication parallel computing, few of them address the problem on heterogeneous processor platforms. It still remains an open question on heterogeneous processor platforms to find the optimal schedule that balances the load within the heterogeneous processor set while minimizing the amount of communication. A great many studies are based on rectangular partition, whereas the optimality of rectangular partition as the basis has not been well justified. In this paper, we propose a new method that schedules matrix multiplication on heterogeneous processor platforms with the mixed co-design goal of minimizing the total communication volume and the multiplication completion time. We first present the schema of our layer based partition (LBP) method. Subsequently, we demonstrate that our approach guarantees minimal communication volume, which is smaller than what rectangular partition can reach. We further analyze the problem of minimizing the task completion time, with network topologies taken into account. We solve this problem in both single-neighbor network case and multi-neighbor network case. In single-neighbor network cases, we propose an equality based method to solve LBP, and simulation shows that the total communication volume is reduced by 75 from the lower bound of rectangular partition. In multi-neighbor network cases, we formulate LBP as a Mixed Integer Programming problem, and reduce the total communication volume by 81 through simulation. To summarize, this is a promising perspective of tackling matrix multiplication problems on heterogeneous processor platforms.
In summary, researchers have begun to realize the difficulty of finding the optimal partition that balances loads while minimizing communication volume. Beaumont's work @cite_22 explicitly reveals that it is an NP-complete problem. Moreover, although alternative perspectives have been proposed @cite_27 @cite_17 , those perspectives still keep the majority of the partitions rectangular, which does not completely resolve the restriction imposed by geometric shapes. In contrast, our scheme avoids the NP-complete problem and makes it very easy to obtain a communication-optimal partition, which reaches the lower bound of total communication volume.
{ "cite_N": [ "@cite_27", "@cite_22", "@cite_17" ], "mid": [ "2160556361", "2163674731", "1560662781" ], "abstract": [ "Parallel Matrix-Matrix Multiplication (MMM) is a fundamental part of the linear algebra libraries used by scientific applications on high performance computers. As heterogeneous systems have emerged as high performance computing platforms, the traditional homogeneous algorithms have been adapted to these heterogeneous environments. Although heterogeneous systems have been in use for some time, it remains an open problem of how to optimally partition data on heterogeneous processors to minimize computation, communication, and execution time. While the question of how to subdivide these MMM problems among heterogeneous processors has been studied, the underlying assumption of this prior study is that the data partition shape, the layout of the data within the matrix assigned to each processor, should be rectangular, i.e. that each processor should be assigned a rectangular portion of the matrix to compute. Our previous work in this area questioned the optimality of this traditional rectangular shape and studied this partition shape problem for two processors. In that work, we proposed a novel mathematical method for transforming partition shapes to decrease communication cost and an analytical technique for determining the optimal shape. In this work, we extend this technique to apply to three and more heterogeneous processors. While applying this method to two processors is relatively straightforward, the complexity grows immensely when considering three processors. With this complexity in mind, we propose a hybrid of experimental and analytical techniques. We postulate that a small number of partition shapes are potentially optimal, and perform extensive testing using a computer aided method to apply our previously developed analytical technique, without finding a counterexample. We identified six data partition shapes which are candidates to be the optimal three processor shape.", "We address the issue of implementing matrix multiplication on heterogeneous platforms. We target two different classes of heterogeneous computing resources: heterogeneous networks of workstations and collections of heterogeneous clusters. Intuitively, the problem is to load balance the work with different speed resources while minimizing the communication volume. We formally state this problem in a geometric framework and prove its NP-completeness. Next, we introduce a (polynomial) column-based heuristic, which turns out to be very satisfactory: We derive a theoretical performance guarantee for the heuristic and we assess its practical usefulness through MPI experiments.", "" ] }
1812.06156
2952311206
As of today, abuse is a pressing issue to participants and administrators of Online Social Networks (OSN). Abuse in Twitter can spawn from arguments generated for influencing outcomes of a political election, the use of bots to automatically spread misinformation, and generally speaking, activities that deny, disrupt, degrade or deceive other participants and, or the network. Given the difficulty in finding and accessing a large enough sample of abuse ground truth from the Twitter platform, we built and deployed a custom crawler that we use to judiciously collect a new dataset from the Twitter platform with the aim of characterizing the nature of abusive users, a.k.a abusive birds, in the wild. We provide a comprehensive set of features based on users' attributes, as well as social-graph metadata. The former includes metadata about the account itself, while the latter is computed from the social graph among the sender and the receiver of each message. Attribute-based features are useful to characterize user's accounts in OSN, while graph-based features can reveal the dynamics of information dissemination across the network. In particular, we derive the Jaccard index as a key feature to reveal the benign or malicious nature of directed messages in Twitter. To the best of our knowledge, we are the first to propose such a similarity metric to characterize abuse in Twitter.
To characterize abuse without considering the content of the communication, graph-based techniques have proven useful for detecting and combating dishonest behavior @cite_22 and cyberbullying @cite_23 , as well as for detecting fake accounts in OSN @cite_10 . However, they suffer from the fact that real-world social graphs do not always conform to the key assumptions made about the system. Thus, it is not easy to prevent attackers from infiltrating the OSN or micro-blogging platform in order to deceive others into befriending them. Consequently, these Sybil accounts can still create the illusion of being strongly connected to a cluster of legitimate user accounts, which in turn would render such graph-based Sybil defenses useless. On the other hand, still in the context of OSN, graph-based Sybil defenses can benefit from supervised machine learning techniques that consider a wider range of metadata as input to the feature set in order to predict potential victims of abuse @cite_3 . The Facebook Immune System (FIS) uses information from user activity logs to automatically detect and act upon suspicious behaviors in the OSN. Such automated or semi-automated methods are not perfect. In relation to the FIS, @cite_0 found that only about 20
{ "cite_N": [ "@cite_22", "@cite_3", "@cite_0", "@cite_23", "@cite_10" ], "mid": [ "1546721273", "2029749307", "1992685726", "84853486", "2168508162" ], "abstract": [ "Dishonest behaviors in on-line networks include the problems caused by those actions performed by certain elements in a network in order to obtain some kind of benefits from the system. The analysis of this phenomenon concerns the WWW from two points of view: the Web as a collection of interrelated documents, and the social networks. In this work we study the web spam detection and the computation of trust and reputation in on-line social networks. We propose two graph-based ranking algorithms, based on different propagation models that spread the information from a set of elements in the network to compute the global relevance of all the nodes in the system.", "Traditional defense mechanisms for fighting against automated fake accounts in online social networks are victim-agnostic. Even though victims of fake accounts play an important role in the viability of subsequent attacks, there is no work on utilizing this insight to improve the status quo. In this position paper, we take the first step and propose to incorporate predictions about victims of unknown fakes into the workflows of existing defense mechanisms. In particular, we investigated how such an integration could lead to more robust fake account defense mechanisms. We also used real-world datasets from Facebook and Tuenti to evaluate the feasibility of predicting victims of fake accounts using supervised machine learning.", "Online Social Networks (OSNs) have become an integral part of today's Web. Politicians, celebrities, revolutionists, and others use OSNs as a podium to deliver their message to millions of active web users. Unfortunately, in the wrong hands, OSNs can be used to run astroturf campaigns to spread misinformation and propaganda. Such campaigns usually start off by infiltrating a targeted OSN on a large scale. In this paper, we evaluate how vulnerable OSNs are to a large-scale infiltration by socialbots: computer programs that control OSN accounts and mimic real users. We adopt a traditional web-based botnet design and built a Socialbot Network (SbN): a group of adaptive socialbots that are orchestrated in a command-and-control fashion. We operated such an SbN on Facebook---a 750 million user OSN---for about 8 weeks. We collected data related to users' behavior in response to a large-scale infiltration where socialbots were used to connect to a large number of Facebook users. Our results show that (1) OSNs, such as Facebook, can be infiltrated with a success rate of up to 80 , (2) depending on users' privacy settings, a successful infiltration can result in privacy breaches where even more users' data are exposed when compared to a purely public access, and (3) in practice, OSN security defenses, such as the Facebook Immune System, are not effective enough in detecting or stopping a large-scale infiltration as it occurs.", "The use of new technologies along with the popularity of social networks has given the power of anonymity to the users. The ability to create an alter-ego with no relation to the actual user, creates a situation in which no one can certify the match between a profile and a real person. 
This problem generates situations, repeated daily, in which users with fake accounts, or at least not related to their real identity, publish news, reviews or multimedia material trying to discredit or attack other people who may or may not be aware of the attack. These acts can have great impact on the affected victims’ environment generating situations in which virtual attacks escalate into fatal consequences in real life. In this paper, we present a methodology to detect and associate fake profiles on Twitter social network which are employed for defamatory activities to a real profile within the same network by analysing the content of comments generated by both profiles. Accompanying this approach we also present a successful real life use case in which this methodology was applied to detect and stop a cyberbullying situation in a real elementary school.", "Users increasingly rely on the trustworthiness of the information exposed on Online Social Networks (OSNs). In addition, OSN providers base their business models on the marketability of this information. However, OSNs suffer from abuse in the form of the creation of fake accounts, which do not correspond to real humans. Fakes can introduce spam, manipulate online rating, or exploit knowledge extracted from the network. OSN operators currently expend significant resources to detect, manually verify, and shut down fake accounts. Tuenti, the largest OSN in Spain, dedicates 14 full-time employees in that task alone, incurring a significant monetary cost. Such a task has yet to be successfully automated because of the difficulty in reliably capturing the diverse behavior of fake and real OSN profiles. We introduce a new tool in the hands of OSN operators, which we call SybilRank. It relies on social graph properties to rank users according to their perceived likelihood of being fake (Sybils). SybilRank is computationally efficient and can scale to graphs with hundreds of millions of nodes, as demonstrated by our Hadoop prototype. We deployed SybilRank in Tuenti's operation center. We found that ∼90 of the 200K accounts that SybilRank designated as most likely to be fake, actually warranted suspension. On the other hand, with Tuenti's current user-report-based approach only ∼5 of the inspected accounts are indeed fake." ] }
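The Jaccard index mentioned in the abstract as a key graph-based feature can be computed directly from the neighbourhood sets of the sender and receiver of a directed message. The snippet below is a hypothetical sketch with made-up user ids, not the paper's crawler code:

```python
def jaccard(a, b):
    """Jaccard index |A ∩ B| / |A ∪ B| between two sets of user ids."""
    a, b = set(a), set(b)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical neighbourhoods (followers + followees) of the sender and the
# receiver of a directed Twitter mention.
sender_neighbors = {"u1", "u2", "u3", "u7"}
receiver_neighbors = {"u2", "u3", "u9"}

similarity = jaccard(sender_neighbors, receiver_neighbors)
print(similarity)   # 0.4 -- low values hint at messages between unrelated accounts
```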
1812.06156
2952311206
As of today, abuse is a pressing issue to participants and administrators of Online Social Networks (OSN). Abuse in Twitter can spawn from arguments generated for influencing outcomes of a political election, the use of bots to automatically spread misinformation, and generally speaking, activities that deny, disrupt, degrade or deceive other participants and, or the network. Given the difficulty in finding and accessing a large enough sample of abuse ground truth from the Twitter platform, we built and deployed a custom crawler that we use to judiciously collect a new dataset from the Twitter platform with the aim of characterizing the nature of abusive users, a.k.a abusive birds, in the wild. We provide a comprehensive set of features based on users' attributes, as well as social-graph metadata. The former includes metadata about the account itself, while the latter is computed from the social graph among the sender and the receiver of each message. Attribute-based features are useful to characterize user's accounts in OSN, while graph-based features can reveal the dynamics of information dissemination across the network. In particular, we derive the Jaccard index as a key feature to reveal the benign or malicious nature of directed messages in Twitter. To the best of our knowledge, we are the first to propose such a similarity metric to characterize abuse in Twitter.
Firstly, previous datasets in this area are either not yet released or too immature to verify their applicability as an abuse ground-truth gold standard. The authors of @cite_14 claim to outperform deep learning techniques in detecting hate speech, derogatory language and profanity. They compare their results with a previous dataset from @cite_12 and assess the accuracy of detecting abusive language with distributional semantic features, finding that it largely depends on the evolution of the content that abusers post on the platform, unless the model is retrained.
{ "cite_N": [ "@cite_14", "@cite_12" ], "mid": [ "2340954483", "1071251684" ], "abstract": [ "Detection of abusive language in user generated online content has become an issue of increasing importance in recent years. Most current commercial methods make use of blacklists and regular expressions, however these measures fall short when contending with more subtle, less ham-fisted examples of hate speech. In this work, we develop a machine learning based method to detect hate speech on online user comments from two domains which outperforms a state-of-the-art deep learning approach. We also develop a corpus of user comments annotated for abusive language, the first of its kind. Finally, we use our detection tool to analyze abusive language over time and in different settings to further enhance our knowledge of this behavior.", "We address the problem of hate speech detection in online user comments. Hate speech, defined as an \"abusive speech targeting specific group characteristics, such as ethnicity, religion, or gender\", is an important problem plaguing websites that allow users to leave feedback, having a negative impact on their online business and overall user experience. We propose to learn distributed low-dimensional representations of comments using recently proposed neural language models, that can then be fed as inputs to a classification algorithm. Our approach addresses issues of high-dimensionality and sparsity that impact the current state-of-the-art, resulting in highly efficient and effective hate speech detectors." ] }
1812.06156
2952311206
As of today, abuse is a pressing issue to participants and administrators of Online Social Networks (OSN). Abuse in Twitter can spawn from arguments generated for influencing outcomes of a political election, the use of bots to automatically spread misinformation, and generally speaking, activities that deny, disrupt, degrade or deceive other participants and, or the network. Given the difficulty in finding and accessing a large enough sample of abuse ground truth from the Twitter platform, we built and deployed a custom crawler that we use to judiciously collect a new dataset from the Twitter platform with the aim of characterizing the nature of abusive users, a.k.a abusive birds, in the wild. We provide a comprehensive set of features based on users' attributes, as well as social-graph metadata. The former includes metadata about the account itself, while the latter is computed from the social graph among the sender and the receiver of each message. Attribute-based features are useful to characterize user's accounts in OSN, while graph-based features can reveal the dynamics of information dissemination across the network. In particular, we derive the Jaccard index as a key feature to reveal the benign or malicious nature of directed messages in Twitter. To the best of our knowledge, we are the first to propose such a similarity metric to characterize abuse in Twitter.
Finally, it is worth mentioning that we do not include sentiment analysis inputs in our feature set, as @cite_6 did, simply because we are interested in complex types of abuse that require more than just textual content analysis. Additionally, we have noticed that while some words or expressions may seem abusive at first (e.g., vulgar language), they are not when the conversation takes place between participants who know each other well or are mutually connected in the social graph (e.g., family relatives).
{ "cite_N": [ "@cite_6" ], "mid": [ "2951392332" ], "abstract": [ "Sentiment in social media is increasingly considered as an important resource for customer segmentation, market understanding, and tackling other socio-economic issues. However, sentiment in social media is difficult to measure since user-generated content is usually short and informal. Although many traditional sentiment analysis methods have been proposed, identifying slang sentiment words remains untackled. One of the reasons is that slang sentiment words are not available in existing dictionaries or sentiment lexicons. To this end, we propose to build the first sentiment dictionary of slang words to aid sentiment analysis of social media content. It is laborious and time-consuming to collect and label the sentiment polarity of a comprehensive list of slang words. We present an approach to leverage web resources to construct an extensive Slang Sentiment word Dictionary (SlangSD) that is easy to maintain and extend. SlangSD is publicly available for research purposes. We empirically show the advantages of using SlangSD, the newly-built slang sentiment word dictionary for sentiment classification, and provide examples demonstrating its ease of use with an existing sentiment system." ] }
1812.06407
2904185608
Visual Tracking is a complex problem due to unconstrained appearance variations and dynamic environment. Extraction of complementary information from the object environment via multiple features and adaption to the target's appearance variations are the key problems of this work. To this end, we propose a robust object tracking framework based on Unified Graph Fusion (UGF) of multi-cue to adapt to the object's appearance. The proposed cross-diffusion of sparse and dense features not only suppresses the individual feature deficiencies but also extracts the complementary information from multi-cue. This iterative process builds robust unified features which are invariant to object deformations, fast motion, and occlusion. Robustness of the unified feature also enables the random forest classifier to precisely distinguish the foreground from the background, adding resilience to background clutter. In addition, we present a novel kernel-based adaptation strategy using outlier detection and a transductive reliability metric.
Mostly, generative models represent the target's appearance using foreground information. In this direction, many sparse-based methods @cite_18 @cite_31 @cite_3 were proposed. In @cite_18 , the authors proposed a weighted local sparse representation to measure the importance of each patch. The adaptive template update strategy considered incremental subspace learning and sparse coding for each reliable patch. On the other hand, in @cite_48 , the appearance characteristics of local patches were categorized as stable, invalid and valid, with an importance weighting. Discriminative local sparse coding was used to separate background patches from stable ones, and a linear regressor was used to separate invalid patches from valid ones. Xue et al. @cite_47 exploited three types of spatio-temporal context information for long-, medium-, and short-term components during visual tracking, with visual spatial information used for sample selection. Chen et al. @cite_15 used frame difference masks to encode large-scale corruption information into a modulated dynamic sample dictionary. Hu et al. @cite_50 exploited structure-aware local sparse coding to encode each candidate sample, where the number of encoding samples was reduced by a global sparsity constraint on the coding coefficients. Generally, sparse-based generative methods cannot handle many of the tracking challenges due to the lack of background discriminative information.
{ "cite_N": [ "@cite_18", "@cite_31", "@cite_15", "@cite_48", "@cite_3", "@cite_50", "@cite_47" ], "mid": [ "2803629456", "", "2509359898", "2809516743", "", "2786383677", "2732089612" ], "abstract": [ "Sparse representation has been widely exploited to develop an effective appearance model for object tracking due to its well discriminative capability in distinguishing the target from its surrounding background. However, most of these methods only consider either the holistic representation or the local one for each patch with equal importance, and hence may fail when the target suffers from severe occlusion or large-scale pose variation. In this paper, we propose a simple yet effective approach that exploits rich feature information from reliable patches based on weighted local sparse representation that takes into account the importance of each patch. Specifically, we design a reconstruction-error based weight function with the reconstruction error of each patch via sparse coding to measure the patch reliability. Moreover, we explore spatio-temporal context information to enhance the robustness of the appearance model, in which the global temporal context is learned via incremental subspace and sparse representation learning with a novel dynamic template update strategy to update the dictionary, while the local spatial context considers the correlation between the target and its surrounding background via measuring the similarity among their sparse coefficients. Extensive experimental evaluations on two large tracking benchmarks demonstrate favorable performance of the proposed method over some state-of-the-art trackers.", "", "Visual tracking is a critical task in many computer vision applications such as surveillance and robotics. However, although the robustness to local corruptions has been improved, prevailing trackers are still sensitive to large scale corruptions, such as occlusions and illumination variations. In this paper, we propose a novel robust object tracking technique depends on subspace learning-based appearance model. Our contributions are twofold. First, mask templates produced by frame difference are introduced into our template dictionary. Since the mask templates contain abundant structure information of corruptions, the model could encode information about the corruptions on the object more efficiently. Meanwhile, the robustness of the tracker is further enhanced by adopting system dynamic, which considers the moving tendency of the object. Second, we provide the theoretic guarantee that by adapting the modulated template dictionary system, our new sparse model can be solved by the accelerated proximal gradient algorithm as efficient as in traditional sparse tracking methods. Extensive experimental evaluations demonstrate that our method significantly outperforms 21 other cutting-edge algorithms in both speed and tracking accuracy, especially when there are challenges such as pose variation, occlusion, and illumination changes.", "In this paper, we propose a novel local sparse representation-based tracking framework for visual tracking. To deeply mine the appearance characteristics of different local patches, the proposed method divides all local patches of a candidate target into three categories, which are stable patches, valid patches, and invalid patches. All these patches are assigned different weights to consider the different importance of the local patches. 
For stable patches, we introduce a local sparse score to identify them, and discriminative local sparse coding is developed to decrease the weights of background patches among the stable patches. For valid patches and invalid patches, we adopt local linear regression to distinguish the former from the latter. Furthermore, we propose a weight shrinkage method to determine weights for different valid patches to make our patch weight computation more reasonable. Experimental results on public tracking benchmarks with challenging sequences demonstrate that the proposed method performs favorably against other state-of-the-art tracking methods.", "", "Sparse coding has been applied to visual tracking and related vision problems with demonstrated success in recent years. Existing tracking methods based on local sparse coding sample patches from a target candidate and sparsely encode these using a dictionary consisting of patches sampled from target template images. The discriminative strength of existing methods based on local sparse coding is limited as spatial structure constraints among the template patches are not exploited. To address this problem, we propose a structure-aware local sparse coding algorithm, which encodes a target candidate using templates with both global and local sparsity constraints. For robust tracking, we show the local regions of a candidate region should be encoded only with the corresponding local regions of the target templates that are the most similar from the global view. Thus, a more precise and discriminative sparse representation is obtained to account for appearance changes. To alleviate the issues with tracking drifts, we design an effective template update scheme. Extensive experiments on challenging image sequences demonstrate the effectiveness of the proposed algorithm against numerous state-of-the-art methods.", "In order to tackle the incomplete and inaccurate of the samples in most tracking-by-detection algorithms, this paper presents an object tracking algorithm, termed as multi-scale spatio-temporal context (MSTC) learning tracking. MSTC collaboratively explores three different types of spatio-temporal contexts, named the long-term historical targets, the medium-term stable scene (i.e., a short continuous and stable video sequence), and the short-term overall samples to improve the tracking efficiency and reduce the drift phenomenon. Different from conventional multi-timescale tracking paradigm that chooses samples in a fixed manner, MSTC formulates a low-dimensional representation named fast perceptual hash algorithm to update long-term historical targets and the medium-term stable scene dynamically with image similarity. MSTC also differs from most tracking-by-detection algorithms that label samples as positive or negative, it investigates a fusion salient sample detection to fuse weights of the samples not only by the distance information, but also by the visual spatial attention, such as color, intensity, and texture. Numerous experimental evaluations with most state-of-the-art algorithms on the standard 50 video benchmark demonstrate the superiority of the proposed algorithm." ] }
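The patch-weighting idea above (importance derived from sparse-coding reconstruction error) can be sketched with ordinary least squares standing in for the sparse coding step; dictionary size, patch size and the exponential weighting are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
patch_dim, n_atoms, n_patches = 64, 20, 9      # e.g. 8x8 patches, 9 patches per candidate

D = rng.standard_normal((patch_dim, n_atoms))          # template (dictionary) patches
patches = rng.standard_normal((n_patches, patch_dim))  # patches of one tracking candidate

weights = np.empty(n_patches)
for i, p in enumerate(patches):
    # Least-squares coding stands in for the sparse coding used by the trackers.
    coef, *_ = np.linalg.lstsq(D, p, rcond=None)
    err = np.linalg.norm(p - D @ coef)                  # reconstruction error
    weights[i] = np.exp(-err)                           # reliable patches -> larger weight

weights /= weights.sum()                                # normalized patch importance
print(weights.round(3))
```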
1812.06407
2904185608
Visual Tracking is a complex problem due to unconstrained appearance variations and dynamic environment. Extraction of complementary information from the object environment via multiple features and adaption to the target's appearance variations are the key problems of this work. To this end, we propose a robust object tracking framework based on Unified Graph Fusion (UGF) of multi-cue to adapt to the object's appearance. The proposed cross-diffusion of sparse and dense features not only suppresses the individual feature deficiencies but also extracts the complementary information from multi-cue. This iterative process builds robust unified features which are invariant to object deformations, fast motion, and occlusion. Robustness of the unified feature also enables the random forest classifier to precisely distinguish the foreground from the background, adding resilience to background clutter. In addition, we present a novel kernel-based adaptation strategy using outlier detection and a transductive reliability metric.
Recently, many deep learning based trackers @cite_13 @cite_33 @cite_21 have been proposed owing to their favorable performance in modeling the target's appearance. In @cite_13 , the authors considered the problem of visual tracking as a trajectory estimation task using convolutional and recurrent units; a large amount of training data is required to predict trajectories, which is often unavailable. In @cite_2 , the authors exploited part context information and importance reliability to preserve the target's spatial information using deep regression. In @cite_46 , the authors used a Siamese network with an online adaptive hedge algorithm to measure similarity between the target and weak experts built on CNN layers. Similarly, Gao et al. @cite_43 train a Siamese neural network to construct a structural relationship between image patches. Most deep learning based visual trackers require pre-training of the neural network on a large auxiliary image dataset. To avoid pre-training, Zhang et al. @cite_38 initialize random convolutional filters and only train the fully connected layer. Even then, deep learning based trackers are hard to use in real-time scenarios because they employ computationally intensive fine-tuning.
{ "cite_N": [ "@cite_38", "@cite_33", "@cite_21", "@cite_43", "@cite_2", "@cite_46", "@cite_13" ], "mid": [ "2514029627", "", "", "2579238278", "2790441826", "2800990595", "2617855130" ], "abstract": [ "Deep neural network-based methods have recently achieved excellent performance in visual tracking task. As very few training samples are available in visual tracking task, those approaches rely heavily on extremely large auxiliary dataset such as ImageNet to pretrain the model. In order to address the discrepancy between the source domain (the auxiliary data) and the target domain (the object being tracked), they need to be finetuned during the tracking process. However, those methods suffer from sensitivity to the hyper-parameters such as learning rate, maximum number of epochs, size of mini-batch, and so on. Thus, it is worthy to investigate whether pretraining and fine tuning through conventional back-prop is essential for visual tracking. In this paper, we shed light on this line of research by proposing convolutional random vector functional link (CRVFL) neural network, which can be regarded as a marriage of the convolutional neural network and random vector functional link network, to simplify the visual tracking system. The parameters in the convolutional layer are randomly initialized and kept fixed. Only the parameters in the fully connected layer need to be learned. We further propose an elegant approach to update the tracker. In the widely used visual tracking benchmark, without any auxiliary data, a single CRVFL model achieves 79.0 with a threshold of 20 pixels for the precision plot. Moreover, an ensemble of CRVFL yields comparatively the best result of 86.3 .", "", "", "Most existing tracking methods are direct trackers, which directly exploit foreground or and background information for object appearance modeling and decide whether an image patch is target object or not. As a result, these trackers cannot perform well when target appearance changes heavily and becomes different from its model. To deal with this issue, we propose a novel relative tracker, which can effectively exploit the relative relationship among image patches from both foreground and background for object appearance modeling. Different from direct trackers, the proposed relative tracker is robust to localize target object by use of the best image patch with the highest relative score to the target appearance model. To model relative relationship among large-scale image patch pairs, we propose a novel and effective deep relative learning algorithm through the convolutional neural network. We test the proposed approach on challenging sequences involving heavy occlusion, drastic illumination changes, and large pose variations. Experimental results show that our method consistently outperforms the state-of-the-art trackers due to the powerful capacity of the proposed deep relative model.", "Most existing part-based tracking methods are part-to-part trackers, which usually have two separated steps including the part matching and target localization. Different from existing methods, in this paper, we propose a novel part-to-target (P2T) tracker in a unified fashion by inferring target location from parts directly. To achieve this goal, we propose a novel deep regression model for P2T regression in an end-to-end framework via convolutional neural networks. 
The proposed model is designed not only to exploit the part context information to preserve object spatial layout structure, but also to learn part reliability to emphasize part importance for the robust P2T regression. We evaluate the proposed tracker on four challenging benchmark sequences, and extensive experimental results demonstrate that our method performs favorably against state-of-the-art trackers because of the powerful capacity of the proposed deep regression model.", "Convolutional Neural Networks (CNNs) have been applied to visual tracking with demonstrated success in recent years. Most CNN-based trackers utilize hierarchical features extracted from a certain layer to represent the target. However, features from a certain layer are not always effective for distinguishing the target object from the backgrounds especially in the presence of complicated interfering factors (e.g., heavy occlusion, background clutter, illumination variation, and shape deformation). In this work, we propose a CNN-based tracking algorithm which hedges deep features from different CNN layers to better distinguish target objects and background clutters. Correlation filters are applied to feature maps of each CNN layer to construct a weak tracker, and all weak trackers are hedged into a strong one. For robust visual tracking, we propose a hedge method to adaptively determine weights of weak classifiers by considering both the difference between the historical as well as instantaneous performance, and the difference among all weak trackers over time. In addition, we design a Siamese network to define the loss of each weak tracker for the proposed hedge method. Extensive experiments on large benchmark datasets demonstrate the effectiveness of the proposed algorithm against the state-of-the-art tracking methods.", "Motion models have been proved to be a crucial part in the visual tracking process. In recent trackers, particle filter and sliding windows-based motion models have been widely used. Treating motion models as a sequence prediction problem, we can estimate the motion of objects using their trajectories. Moreover, it is possible to transfer the learned knowledge from annotated trajectories to new objects. Inspired by recent advance in deep learning for visual feature extraction and sequence prediction, we propose a trajectory predictor to learn prior knowledge from annotated trajectories and transfer it to predict the motion of target objects. In this predictor, convolutional neural networks extract the visual features of target objects. Long short-term memory model leverages the annotated trajectory priors as well as sequential visual information, which includes the tracked features and center locations of the target object, to predict the motion. Furthermore, to extend this method to videos in which it is difficult to obtain annotated trajectories, a dynamic weighted motion model that combines the proposed trajectory predictor with a random sampler is proposed. To evaluate the transfer performance of the proposed trajectory predictor, we annotated a real-world vehicle dataset. Experiment results on both this real-world vehicle dataset and an online tracker benchmark dataset indicate that the proposed method outperforms several state-of-the-art trackers." ] }
1812.06319
2912563296
Recent successes of value-based multi-agent deep reinforcement learning employ optimism by limiting underestimation updates of the value function estimator, through a carefully controlled learning rate (, 2017) or a reduced update probability (, 2018). To achieve full cooperation when learning independently, an agent must estimate the state values contingent on having optimal teammates; therefore, value overestimation is frequently injected to counteract negative effects caused by unobservable teammate sub-optimal policies and explorations. Aiming to solve this issue through automatic scheduling, this paper introduces a decentralized quantile estimator, which we found empirically to be more stable, sample efficient and more likely to converge to the joint optimal policy.
Hysteretic Q-Learning (HQL) @cite_20 attempts to mitigate this issue by injecting overestimation into the value estimate through a reduced learning rate for negative updates. Two learning rates @math and @math , named the increase rate and the decrease rate, are used respectively for updates with overestimated and underestimated TD error @math :
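As a concrete illustration (the exact equation is masked as @math above), the standard hysteretic update can be written as follows; the symbols alpha (increase rate) and beta < alpha (decrease rate) are notational assumptions for this sketch rather than the record's own symbols:

```latex
% Hedged sketch of the standard hysteretic Q-learning update;
% alpha is the increase rate, beta < alpha is the decrease rate.
\delta \leftarrow r + \gamma \max_{a'} Q(s', a') - Q(s, a), \qquad
Q(s, a) \leftarrow
\begin{cases}
  Q(s, a) + \alpha \, \delta & \text{if } \delta \ge 0,\\
  Q(s, a) + \beta  \, \delta & \text{otherwise.}
\end{cases}
```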
{ "cite_N": [ "@cite_20" ], "mid": [ "2108892923" ], "abstract": [ "Multi-agent systems (MAS) are a field of study of growing interest in a variety of domains such as robotics or distributed controls. The article focuses on decentralized reinforcement learning (RL) in cooperative MAS, where a team of independent learning robots (IL) try to coordinate their individual behavior to reach a coherent joint behavior. We assume that each robot has no information about its teammates' actions. To date, RL approaches for such ILs did not guarantee convergence to the optimal joint policy in scenarios where the coordination is difficult. We report an investigation of existing algorithms for the learning of coordination in cooperative MAS, and suggest a Q-learning extension for ILs, called hysteretic Q-learning. This algorithm does not require any additional communication between robots. Its advantages are showing off and compared to other methods on various applications: bi-matrix games, collaborative ball balancing task and pursuit domain." ] }
1812.06319
2912563296
Recent successes of value-based multi-agent deep reinforcement learning employ optimism by limiting underestimation updates of the value function estimator, through a carefully controlled learning rate (, 2017) or a reduced update probability (, 2018). To achieve full cooperation when learning independently, an agent must estimate the state values contingent on having optimal teammates; therefore, value overestimation is frequently injected to counteract negative effects caused by unobservable teammate sub-optimal policies and explorations. Aiming to solve this issue through automatic scheduling, this paper introduces a decentralized quantile estimator, which we found empirically to be more stable, sample efficient and more likely to converge to the joint optimal policy.
Hysteretic Deep Q-Network (HDQN) @cite_4 applies HQL to DQN. Using DQN as a basis, the TD error is given by @math . For simplicity, HDQN first sets a base learning rate @math suitable for the network (e.g., @math ), and scales this learning rate into @math and @math . In practice, HDQN usually fixes @math at @math and tunes @math and @math instead. Thus, in the following discussion, we only consider the choice and effect of @math (the decrease rate) under the assumption that @math .
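A minimal sketch of how the two effective learning rates could be applied to a single scalar update, assuming the common TD error convention delta = r + gamma * max Q' - Q; the function and parameter names (base_lr, mu, beta) are illustrative assumptions, not the paper's notation or API:

```python
def hysteretic_td_update(q_sa, reward, gamma, max_q_next,
                         base_lr=1e-4, mu=1.0, beta=0.2):
    """Apply one hysteretic TD update to a scalar Q estimate.

    The base learning rate is scaled by mu when the TD error is non-negative
    and by beta (< mu) when it is negative, so corrections of underestimation
    dominate. All names and default values are assumptions for illustration.
    """
    delta = reward + gamma * max_q_next - q_sa         # TD error
    scale = mu if delta >= 0 else beta                 # hysteresis on the sign
    return q_sa + base_lr * scale * delta
```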
{ "cite_N": [ "@cite_4" ], "mid": [ "2951896791" ], "abstract": [ "Many real-world tasks involve multiple agents with partial observability and limited communication. Learning is challenging in these settings due to local viewpoints of agents, which perceive the world as non-stationary due to concurrently-exploring teammates. Approaches that learn specialized policies for individual tasks face problems when applied to the real world: not only do agents have to learn and store distinct policies for each task, but in practice identities of tasks are often non-observable, making these approaches inapplicable. This paper formalizes and addresses the problem of multi-task multi-agent reinforcement learning under partial observability. We introduce a decentralized single-task learning approach that is robust to concurrent interactions of teammates, and present an approach for distilling single-task policies into a unified policy that performs well across multiple related tasks, without explicit provision of task identity." ] }
1812.06319
2912563296
Recent successes of value-based multi-agent deep reinforcement learning employ optimism by limiting underestimation updates of the value function estimator, through a carefully controlled learning rate (, 2017) or a reduced update probability (, 2018). To achieve full cooperation when learning independently, an agent must estimate the state values contingent on having optimal teammates; therefore, value overestimation is frequently injected to counteract negative effects caused by unobservable teammate sub-optimal policies and explorations. Aiming to solve this issue through automatic scheduling, this paper introduces a decentralized quantile estimator, which we found empirically to be more stable, sample efficient and more likely to converge to the joint optimal policy.
In order to reason under partial observability, the Hysteretic Deep Recurrent Q-Network (HDRQN), introduced by , utilizes a recurrent layer (LSTM) and is trained using experience traces sampled from an experience buffer. Decentralized buffers (called CERTs) featuring sample synchronization were adopted to stabilize training. When using CERTs (Concurrent Experience Replay Trajectories), every agent has its own experience buffer with a deterministic seed. Thus, at each sampling operation, traces of the same time steps are sampled across agents. Concurrent sampling during training is motivated by stabilizing coordination despite shadowed equilibria and avoiding diverging policies. Earlier attempts disabled experience replay due to the non-concurrent evolution of agents' policies @cite_7 .
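A toy sketch of the synchronized-sampling idea behind CERTs: every agent owns its own buffer but seeds its sampling RNG identically, so sampled batches refer to the same time steps across agents. The class and method names here are illustrative, not the original implementation:

```python
import random
from collections import deque

class ConcurrentReplayBuffer:
    """Per-agent buffer whose sampling RNG is seeded identically across agents,
    so each agent draws traces from the same time steps (the CERT idea).
    Names and structure are assumptions for illustration."""

    def __init__(self, capacity, shared_seed):
        self.traces = deque(maxlen=capacity)
        self.rng = random.Random(shared_seed)   # same seed on every agent

    def add(self, trace):
        self.traces.append(trace)

    def sample(self, batch_size):
        # If buffers are filled concurrently and every agent's RNG has the
        # same state, the sampled indices coincide across agents.
        idx = [self.rng.randrange(len(self.traces)) for _ in range(batch_size)]
        return [self.traces[i] for i in idx]
```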
{ "cite_N": [ "@cite_7" ], "mid": [ "2255045308" ], "abstract": [ "We propose deep distributed recurrent Q-networks (DDRQN), which enable teams of agents to learn to solve communication-based coordination tasks. In these tasks, the agents are not given any pre-designed communication protocol. Therefore, in order to successfully communicate, they must first automatically develop and agree upon their own communication protocol. We present empirical results on two multi-agent learning problems based on well-known riddles, demonstrating that DDRQN can successfully solve such tasks and discover elegant communication protocols to do so. To our knowledge, this is the first time deep reinforcement learning has succeeded in learning communication protocols. In addition, we present ablation experiments that confirm that each of the main components of the DDRQN architecture are critical to its success." ] }
1812.06319
2912563296
Recent successes of value-based multi-agent deep reinforcement learning employ optimism by limiting underestimation updates of the value function estimator, through a carefully controlled learning rate (, 2017) or a reduced update probability (, 2018). To achieve full cooperation when learning independently, an agent must estimate the state values contingent on having optimal teammates; therefore, value overestimation is frequently injected to counteract negative effects caused by unobservable teammate sub-optimal policies and explorations. Aiming to solve this issue through automatic scheduling, this paper introduces a decentralized quantile estimator, which we found empirically to be more stable, sample efficient and more likely to converge to the joint optimal policy.
Lenient Learning @cite_26 schedules the decrease of leniency applied to individual state-action pairs using decaying temperatures, where leniency is the probability of ignoring a negative @math value update.
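A hedged sketch of how temperature-scheduled leniency could gate negative updates: a negative TD update is ignored with a probability tied to a per-state-action temperature that decays with visits. The exponential mapping and the constants below are assumptions for illustration, not the cited method's exact formulation:

```python
import math
import random
from collections import defaultdict

class LenientUpdateGate:
    """Toy leniency schedule: each (state, action) pair starts 'hot' and cools
    down as it is visited; while hot, negative updates are likely ignored."""

    def __init__(self, initial_temp=1.0, decay=0.995, k=2.0):
        self.temp = defaultdict(lambda: initial_temp)  # per-pair temperature
        self.decay, self.k = decay, k

    def accept(self, state, action, td_error):
        t = self.temp[(state, action)]
        self.temp[(state, action)] = t * self.decay     # cool this pair down
        if td_error >= 0:
            return True                                 # positive updates always pass
        leniency = 1.0 - math.exp(-self.k * t)          # high temperature -> high leniency
        return random.random() >= leniency              # ignore negatives while lenient
```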
{ "cite_N": [ "@cite_26" ], "mid": [ "2108449787" ], "abstract": [ "In concurrent learning algorithms, an agent's perception of the joint search space depends on the actions currently chosen by the other agents. These perceptions change as each agent's action selection is influenced by its learning. We observe that agents that show lenience to their teammates achieve more accurate perceptions of the overall learning task. Additionally, lenience appears more beneficial at early stages of learning, when the agent's teammates are merely exploring their actions, and less helpful as the agents start to converge. We propose two multiagent learning algorithms where agents exhibit a variable degree of lenience, and we demonstrate their advantages in several coordination problems." ] }
1812.06319
2912563296
Recent successes of value-based multi-agent deep reinforcement learning employ optimism by limiting underestimation updates of the value function estimator, through a carefully controlled learning rate (, 2017) or a reduced update probability (, 2018). To achieve full cooperation when learning independently, an agent must estimate the state values contingent on having optimal teammates; therefore, value overestimation is frequently injected to counteract negative effects caused by unobservable teammate sub-optimal policies and explorations. Aiming to solve this issue through automatic scheduling, this paper introduces a decentralized quantile estimator, which we found empirically to be more stable, sample efficient and more likely to converge to the joint optimal policy.
IQN @cite_1 is a single-agent deep RL method which we extend to multi-agent partially observable settings. As a distributional RL method, quantile networks represent a distribution over returns, denoted @math , where @math , by estimating the inverse c.d.f. of @math , denoted @math . Implicit Quantile Networks estimate @math for a given state-action pair, @math , from samples drawn from some base distribution ranging from 0 to 1: @math , where @math is the quantile value that the network aims to estimate. The estimated expected return can be obtained by averaging over multiple quantile estimates, where @math distorts risk sensitivity. Risk neutrality is achieved when @math . In Section we will discuss how we distort risk in multi-agent domains and do so in a dynamic fashion where risk approaches neutral as the exploration probability approaches @math .
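As a concrete toy example of the averaging step, assuming a hypothetical callable `quantile_net(obs, action, tau)` that returns the quantile estimate for the sampled fraction tau (a stand-in, not the paper's architecture):

```python
import numpy as np

def expected_return(quantile_net, obs, action, n_samples=32,
                    distortion=lambda tau: tau):
    """IQN-style action-value estimate: draw tau ~ U(0, 1), optionally distort
    it for risk sensitivity, and average the resulting quantile estimates.
    The identity distortion corresponds to risk neutrality."""
    taus = np.random.uniform(0.0, 1.0, size=n_samples)
    taus = distortion(taus)                              # risk distortion
    z = np.array([quantile_net(obs, action, t) for t in taus])
    return z.mean()
```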
{ "cite_N": [ "@cite_1" ], "mid": [ "2803308811" ], "abstract": [ "In this work, we build on recent advances in distributional reinforcement learning to give a generally applicable, flexible, and state-of-the-art distributional variant of DQN. We achieve this by using quantile regression to approximate the full quantile function for the state-action return distribution. By reparameterizing a distribution over the sample space, this yields an implicitly defined return distribution and gives rise to a large class of risk-sensitive policies. We demonstrate improved performance on the 57 Atari 2600 games in the ALE, and use our algorithm's implicitly defined distributions to study the effects of risk-sensitive policies in Atari games." ] }
1812.06319
2912563296
Recent successes of value-based multi-agent deep reinforcement learning employ optimism by limiting underestimation updates of the value function estimator, through a carefully controlled learning rate (, 2017) or a reduced update probability (, 2018). To achieve full cooperation when learning independently, an agent must estimate the state values contingent on having optimal teammates; therefore, value overestimation is frequently injected to counteract negative effects caused by unobservable teammate sub-optimal policies and explorations. Aiming to solve this issue through automatic scheduling, this paper introduces a decentralized quantile estimator, which we found empirically to be more stable, sample efficient and more likely to converge to the joint optimal policy.
The quantile regression loss @cite_23 for estimating the quantile at @math with error @math is defined using the Huber loss @math with threshold @math , which weighs overestimation by @math and underestimation by @math ; @math is used for the linear (non-Huber) loss.
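Because the exact weights are masked as @math in the record, here is a sketch of the commonly used quantile Huber loss (as in QR-DQN/IQN); the delta = target - prediction sign convention and the constant names are assumptions for illustration:

```python
import numpy as np

def quantile_huber_loss(delta, tau, kappa=1.0):
    """Quantile regression loss for error delta at quantile fraction tau.

    With delta = target - prediction, negative errors (overestimation) are
    weighted by 1 - tau and non-negative errors by tau; setting kappa = 0
    falls back to the plain linear quantile loss."""
    delta = np.asarray(delta, dtype=float)
    weight = np.abs(tau - (delta < 0).astype(float))
    if kappa == 0.0:                                   # linear quantile loss
        return weight * np.abs(delta)
    huber = np.where(np.abs(delta) <= kappa,
                     0.5 * delta ** 2,
                     kappa * (np.abs(delta) - 0.5 * kappa))
    return weight * huber / kappa
```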
{ "cite_N": [ "@cite_23" ], "mid": [ "1607777023" ], "abstract": [ "This specification discloses a reduction gear transmission comprising an input shaft carrying a pinion, a satellite carrier having a gear meshing with the pinion, a composite satellite gear on said carrier and including two sections, one with a greater number of teeth than the other, an output gear meshing with the satellite gear section having the smaller number of teeth, an output shaft drivably carrying the output gear, and a holding gear meshing with the other section of the satellite gear and having a hub in which the output shaft is journalled. The hub extends through an opening in the housing in which the aforesaid gear mechanism is mounted. An infinite ratio gear assembly is operatively associated with the hub to control its rate of rotation. This gear assembly includes an inner pair of beveled gears in confronting relation, relatively axially movable, and keyed to the output shaft; an outer pair of beveled ring gears in confronting relation to each other and also confronting the inner beveled gears, a mechanical interlock to cause said outer beveled gears to rotate in unison, all of the faces of said beveled gears having radial grooves, a pin ring disposed in the space defined by the faces of the beveled gears, pins carried by said pin ring and having ends received in said grooves, and a ring shifting device to move the ring radially and thereby adjust the radial positions of the pin ends in the grooves." ] }
1812.06319
2912563296
Recent successes of value-based multi-agent deep reinforcement learning employ optimism by limiting underestimation updates of the value function estimator, through a carefully controlled learning rate (, 2017) or a reduced update probability (, 2018). To achieve full cooperation when learning independently, an agent must estimate the state values contingent on having optimal teammates; therefore, value overestimation is frequently injected to counteract negative effects caused by unobservable teammate sub-optimal policies and explorations. Aiming to solve this issue through automatic scheduling, this paper introduces a decentralized quantile estimator, which we found empirically to be more stable, sample efficient and more likely to converge to the joint optimal policy.
Distributional learning has long been considered a promising approach in approximate reinforcement learning due to reduced chattering @cite_14 @cite_21 . Furthermore, distributional RL methods have been shown, in single-agent settings, to be robust to hyperparameter variation and to have superior sample complexity and performance @cite_3 .
{ "cite_N": [ "@cite_14", "@cite_21", "@cite_3" ], "mid": [ "1547105496", "1575592356", "2798705390" ], "abstract": [ "The success of reinforcement learning in practical problems depends on the ability to combine function approximation with temporal difference methods such as value iteration. Experiments in this area have produced mixed results; there have been both notable successes and notable disappointments. Theory has been scarce, mostly due to the difficulty of reasoning about function approximators that generalize beyond the observed data. We provide a proof of convergence for a wide class of temporal difference methods involving function approximators such as k-nearest-neighbor, and show experimentally that these methods can be useful. The proof is based on a view of function approximators as expansion or contraction mappings. In addition, we present a novel view of approximate value iteration: an approximate algorithm for one environment turns out to be an exact algorithm for a different environment.", "", "This work adopts the very successful distributional perspective on reinforcement learning and adapts it to the continuous control setting. We combine this within a distributed framework for off-policy learning in order to develop what we call the Distributed Distributional Deep Deterministic Policy Gradient algorithm, D4PG. We also combine this technique with a number of additional, simple improvements such as the use of @math -step returns and prioritized experience replay. Experimentally we examine the contribution of each of these individual components, and show how they interact, as well as their combined contributions. Our results show that across a wide variety of simple control tasks, difficult manipulation tasks, and a set of hard obstacle-based locomotion tasks the D4PG algorithm achieves state of the art performance." ] }
1812.06269
2904040126
The pull-based development process has become prevalent on platforms such as GitHub as a form of distributed software development. Potential contributors can create and submit a set of changes to a software project through pull requests. These changes can be accepted, discussed or rejected by the maintainers of the software project, and can influence further contribution proposals. As such, it is important to examine the practices that encourage contributors to a project to submit pull requests. Specifically, we consider the impact of prior pull requests on the acceptance or rejection of subsequent pull requests. We also consider the potential effect of rejecting or ignoring pull requests on further contributions. In this preliminary research, we study three large projects on GitHub , using pull request data obtained through the GitHub API, and we perform empirical analyses to investigate the above questions. Our results show that continued contribution to a project is correlated with higher pull request acceptance rates and that pull request rejections lead to fewer future contributions.
Gousios and Zaidman proposed a PR dataset @cite_9 including 900 projects and 350,000 PRs extracted using GHTorrent. Through a mixed-method analysis of 291 GitHub projects, @cite_3 established that the PR-based development approach is used as frequently as the shared repository approach on GitHub . They observed that most PRs are short, receive few comments and are processed quickly. They also found that most PR rejections are due to the distributed nature of the pull-based process (e.g., PRs that are already obsolete upon creation).
{ "cite_N": [ "@cite_9", "@cite_3" ], "mid": [ "2125854594", "2139092060" ], "abstract": [ "Pull requests form a new method for collaborating in distributed software development. To study the pull request distributed development model, we constructed a dataset of almost 900 projects and 350,000 pull requests, including some of the largest users of pull requests on Github. In this paper, we describe how the project selection was done, we analyze the selected features and present a machine learning tool set for the R statistics environment.", "The advent of distributed version control systems has led to the development of a new paradigm for distributed software development; instead of pushing changes to a central repository, developers pull them from other repositories and merge them locally. Various code hosting sites, notably Github, have tapped on the opportunity to facilitate pull-based development by offering workflow support tools, such as code reviewing systems and integrated issue trackers. In this work, we explore how pull-based software development works, first on the GHTorrent corpus and then on a carefully selected sample of 291 projects. We find that the pull request model offers fast turnaround, increased opportunities for community engagement and decreased time to incorporate contributions. We show that a relatively small number of factors affect both the decision to merge a pull request and the time to process it. We also examine the reasons for pull request rejection and find that technical ones are only a small minority." ] }
1812.06269
2904040126
The pull-based development process has become prevalent on platforms such as GitHub as a form of distributed software development. Potential contributors can create and submit a set of changes to a software project through pull requests. These changes can be accepted, discussed or rejected by the maintainers of the software project, and can influence further contribution proposals. As such, it is important to examine the practices that encourage contributors to a project to submit pull requests. Specifically, we consider the impact of prior pull requests on the acceptance or rejection of subsequent pull requests. We also consider the potential effect of rejecting or ignoring pull requests on further contributions. In this preliminary research, we study three large projects on GitHub , using pull request data obtained through the GitHub API, and we perform empirical analyses to investigate the above questions. Our results show that continued contribution to a project is correlated with higher pull request acceptance rates and that pull request rejections lead to fewer future contributions.
@cite_7 studied the factors that contribute to latency in PR reviews, defining this latency as ``the time interval between pull request creation and closing date''. They found that PR latency is mainly affected by process-related factors such as whether a PR was assigned to a specific reviewer or not. They also found that continuous integration is a dominant factor in PR latency.
{ "cite_N": [ "@cite_7" ], "mid": [ "1992105838" ], "abstract": [ "The pull-based development model, enabled by git and popularised by collaborative coding platforms like Bit Bucket, Gitorius, and GitHub, is widely used in distributed software teams. While this model lowers the barrier to entry for potential contributors (since anyone can submit pull requests to any repository), it also increases the burden on integrators (i.e., Members of a project's core team, responsible for evaluating the proposed changes and integrating them into the main development line), who struggle to keep up with the volume of incoming pull requests. In this paper we report on a quantitative study that tries to resolve which factors affect pull request evaluation latency in GitHub. Using regression modeling on data extracted from a sample of GitHub projects using the Travis-CI continuous integration service, we find that latency is a complex issue, requiring many independent variables to explain adequately." ] }
1812.06269
2904040126
The pull-based development process has become prevalent on platforms such as GitHub as a form of distributed software development. Potential contributors can create and submit a set of changes to a software project through pull requests. These changes can be accepted, discussed or rejected by the maintainers of the software project, and can influence further contribution proposals. As such, it is important to examine the practices that encourage contributors to a project to submit pull requests. Specifically, we consider the impact of prior pull requests on the acceptance or rejection of subsequent pull requests. We also consider the potential effect of rejecting or ignoring pull requests on further contributions. In this preliminary research, we study three large projects on GitHub , using pull request data obtained through the GitHub API, and we perform empirical analyses to investigate the above questions. Our results show that continued contribution to a project is correlated with higher pull request acceptance rates and that pull request rejections lead to fewer future contributions.
Rahman and Roy @cite_15 categorised the technical issues discussed in PR comments and analysed information about projects and developers to obtain insights into PR acceptance or rejection. They discovered that the rate of PR rejection is highly correlated to the programming language used (e.g., Java PRs are more frequently rejected than PRs for the C programming language), the application domain of the project (e.g., the database application domain sees fewer merged PRs than the IDE domain), the maturity of a project (older projects accept fewer PRs) and the number of developers on the project.
{ "cite_N": [ "@cite_15" ], "mid": [ "1994598608" ], "abstract": [ "Given the increasing number of unsuccessful pull requests in GitHub projects, insights into the success and failure of these requests are essential for the developers. In this paper, we provide a comparative study between successful and unsuccessful pull requests made to 78 GitHub base projects by 20,142 developers from 103,192 forked projects. In the study, we analyze pull request discussion texts, project specific information (e.g., domain, maturity), and developer specific information (e.g., experience) in order to report useful insights, and use them to contrast between successful and unsuccessful pull requests. We believe our study will help developers overcome the issues with pull requests in GitHub, and project administrators with informed decision making." ] }
1812.06384
2951389634
Text effects transfer technology automatically makes the text dramatically more impressive. However, previous style transfer methods either study the model for general style, which cannot handle the highly-structured text effects along the glyph, or require manual design of subtle matching criteria for text effects. In this paper, we focus on the use of the powerful representation abilities of deep neural features for text effects transfer. For this purpose, we propose a novel Texture Effects Transfer GAN (TET-GAN), which consists of a stylization subnetwork and a destylization subnetwork. The key idea is to train our network to accomplish both the objective of style transfer and style removal, so that it can learn to disentangle and recombine the content and style features of text effects images. To support the training of our network, we propose a new text effects dataset with as much as 64 professionally designed styles on 837 characters. We show that the disentangled feature representations enable us to transfer or remove all these styles on arbitrary glyphs using one network. Furthermore, the flexible network design empowers TET-GAN to efficiently extend to a new text style via one-shot learning where only one example is required. We demonstrate the superiority of the proposed method in generating high-quality stylized text over the state-of-the-art methods.
Style transfer is the task of migrating styles from an example style image to a content image, and is closely related to texture synthesis. The pioneering work of @cite_21 demonstrates the powerful representation ability of convolutional neural networks to model textures. Gatys formulated textures as the correlation of deep features in the form of a Gram matrix @cite_0 , and transferred styles by matching high-level representations of the content image and the Gram matrices. Since then, deep-based style transfer has become a hot topic, and much follow-up work improves it in different aspects such as acceleration @cite_13 @cite_9 @cite_24 , user control @cite_23 and style diversification @cite_18 . In parallel, Li modelled textures by local patches of feature maps, which enables the transfer of photo-realistic styles.
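For reference, a minimal sketch of the Gram-matrix texture descriptor on a single CNN feature map (a channels x height x width array); the normalization by the number of spatial positions is one common convention, not necessarily the exact one used in the cited work:

```python
import numpy as np

def gram_matrix(features):
    """Channel-wise correlation matrix of a CNN feature map, as used to
    represent texture/style in neural style transfer."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)       # flatten spatial dimensions
    return f @ f.T / (h * w)             # C x C Gram matrix
```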
{ "cite_N": [ "@cite_18", "@cite_9", "@cite_21", "@cite_0", "@cite_24", "@cite_23", "@cite_13" ], "mid": [ "2951745349", "2952226636", "2475287302", "2161208721", "2952767162", "2950078543", "2950689937" ], "abstract": [ "This paper proposes Markovian Generative Adversarial Networks (MGANs), a method for training generative neural networks for efficient texture synthesis. While deep neural network approaches have recently demonstrated remarkable results in terms of synthesis quality, they still come at considerable computational costs (minutes of run-time for low-res images). Our paper addresses this efficiency issue. Instead of a numerical deconvolution in previous work, we precompute a feed-forward, strided convolutional network that captures the feature statistics of Markovian patches and is able to directly generate outputs of arbitrary dimensions. Such network can directly decode brown noise to realistic texture, or photos to artistic paintings. With adversarial training, we obtain quality comparable to recent neural texture synthesis methods. As no optimization is required any longer at generation time, our run-time performance (0.25M pixel images at 25Hz) surpasses previous neural texture synthesizers by a significant margin (at least 500 times faster). We apply this idea to texture synthesis, style transfer, and video stylization.", "recently demonstrated that deep networks can generate beautiful textures and stylized images from a single texture example. However, their methods requires a slow and memory-consuming optimization process. We propose here an alternative approach that moves the computational burden to a learning stage. Given a single example of a texture, our approach trains compact feed-forward convolutional networks to generate multiple samples of the same texture of arbitrary size and to transfer artistic style from a given image to any other image. The resulting networks are remarkably light-weight and can generate textures of quality comparable to Gatys et al., but hundreds of times faster. More generally, our approach highlights the power and flexibility of generative feed-forward models trained with complex and expressive loss functions.", "Rendering the semantic content of an image in different styles is a difficult image processing task. Arguably, a major limiting factor for previous approaches has been the lack of image representations that explicitly represent semantic information and, thus, allow to separate image content from style. Here we use image representations derived from Convolutional Neural Networks optimised for object recognition, which make high level image information explicit. We introduce A Neural Algorithm of Artistic Style that can separate and recombine the image content and style of natural images. The algorithm allows us to produce new images of high perceptual quality that combine the content of an arbitrary photograph with the appearance of numerous wellknown artworks. Our results provide new insights into the deep image representations learned by Convolutional Neural Networks and demonstrate their potential for high level image synthesis and manipulation.", "Here we introduce a new model of natural textures based on the feature spaces of convolutional neural networks optimised for object recognition. Samples from the model are of high perceptual quality demonstrating the generative power of neural networks trained in a purely discriminative fashion. 
Within the model, textures are represented by the correlations between feature maps in several layers of the network. We show that across layers the texture representations increasingly capture the statistical properties of natural images while making object information more and more explicit. The model provides a new tool to generate stimuli for neuroscience and might offer insights into the deep representations learned by convolutional neural networks.", "Transferring artistic styles onto everyday photographs has become an extremely popular task in both academia and industry. Recently, offline training has replaced on-line iterative optimization, enabling nearly real-time stylization. When those stylization networks are applied directly to high-resolution images, however, the style of localized regions often appears less similar to the desired artistic style. This is because the transfer process fails to capture small, intricate textures and maintain correct texture scales of the artworks. Here we propose a multimodal convolutional neural network that takes into consideration faithful representations of both color and luminance channels, and performs stylization hierarchically with multiple losses of increasing scales. Compared to state-of-the-art networks, our network can also perform style transfer in nearly real-time by conducting much more sophisticated training offline. By properly handling style and texture cues at multiple scales using several modalities, we can transfer not just large-scale, obvious style cues but also subtle, exquisite ones. That is, our scheme can generate results that are visually pleasing and more similar to multiple desired artistic styles with color and texture cues at multiple scales.", "Neural Style Transfer has shown very exciting results enabling new forms of image manipulation. Here we extend the existing method to introduce control over spatial location, colour information and across spatial scale. We demonstrate how this enhances the method by allowing high-resolution controlled stylisation and helps to alleviate common failure cases such as applying ground textures to sky regions. Furthermore, by decomposing style into these perceptual factors we enable the combination of style information from multiple sources to generate new, perceptually appealing styles from existing ones. We also describe how these methods can be used to more efficiently produce large size, high-quality stylisation. Finally we show how the introduced control measures can be applied in recent methods for Fast Neural Style Transfer.", "We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. 
We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results." ] }
1812.06384
2951389634
Text effects transfer technology automatically makes the text dramatically more impressive. However, previous style transfer methods either study the model for general style, which cannot handle the highly-structured text effects along the glyph, or require manual design of subtle matching criteria for text effects. In this paper, we focus on the use of the powerful representation abilities of deep neural features for text effects transfer. For this purpose, we propose a novel Texture Effects Transfer GAN (TET-GAN), which consists of a stylization subnetwork and a destylization subnetwork. The key idea is to train our network to accomplish both the objective of style transfer and style removal, so that it can learn to disentangle and recombine the content and style features of text effects images. To support the training of our network, we propose a new text effects dataset with as much as 64 professionally designed styles on 837 characters. We show that the disentangled feature representations enable us to transfer or remove all these styles on arbitrary glyphs using one network. Furthermore, the flexible network design empowers TET-GAN to efficiently extend to a new text style via one-shot learning where only one example is required. We demonstrate the superiority of the proposed method in generating high-quality stylized text over the state-of-the-art methods.
Image-to-image translation is a domain transfer problem where both the input and output are images. Driven by the great advances of GANs, it has been widely studied since being introduced by @cite_5 . Recent work @cite_8 has been able to generate very high-resolution photo-realistic images from semantic label maps. Zhu proposed a novel cycle loss to learn domain translation without paired input-output examples. While most research focuses on translation between two domains, Choi utilized a one-hot vector to specify the target domain, so that a single network can learn mappings between multiple domains, which provides more flexibility. However, extension to new domains is still expensive. In this paper, we introduce a self-stylization training scheme to efficiently learn a new style with only one example required.
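A hedged sketch of the one-hot target-domain conditioning idea: the domain label is broadcast to spatial maps and concatenated to the input channels before entering the generator. Shapes and names are illustrative, not the cited model's code:

```python
import numpy as np

def condition_on_domain(image, domain_id, num_domains):
    """Concatenate a one-hot target-domain code to an image as extra channels.

    image: (C, H, W) array; returns a (C + num_domains, H, W) array that a
    single multi-domain generator could take as input. Purely illustrative.
    """
    c, h, w = image.shape
    one_hot = np.zeros((num_domains, h, w), dtype=image.dtype)
    one_hot[domain_id] = 1.0                   # broadcast the label spatially
    return np.concatenate([image, one_hot], axis=0)
```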
{ "cite_N": [ "@cite_5", "@cite_8" ], "mid": [ "2552465644", "2772222351" ], "abstract": [ "We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Indeed, since the release of the pix2pix software associated with this paper, a large number of internet users (many of them artists) have posted their own experiments with our system, further demonstrating its wide applicability and ease of adoption without the need for parameter tweaking. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.", "We propose a general framework for unsupervised domain adaptation, which allows deep neural networks trained on a source domain to be tested on a different target domain without requiring any training annotations in the target domain. This is achieved by adding extra networks and losses that help regularize the features extracted by the backbone encoder network. To this end we propose the novel use of the recently proposed unpaired image-toimage translation framework to constrain the features extracted by the encoder network. Specifically, we require that the features extracted are able to reconstruct the images in both domains. In addition we require that the distribution of features extracted from images in the two domains are indistinguishable. Many recent works can be seen as specific cases of our general framework. We apply our method for domain adaptation between MNIST, USPS, and SVHN datasets, and Amazon, Webcam and DSLR Office datasets in classification tasks, and also between GTA5 and Cityscapes datasets for a segmentation task. We demonstrate state of the art performance on each of these datasets." ] }
1812.06384
2951389634
Text effects transfer technology automatically makes the text dramatically more impressive. However, previous style transfer methods either study the model for general style, which cannot handle the highly-structured text effects along the glyph, or require manual design of subtle matching criteria for text effects. In this paper, we focus on the use of the powerful representation abilities of deep neural features for text effects transfer. For this purpose, we propose a novel Texture Effects Transfer GAN (TET-GAN), which consists of a stylization subnetwork and a destylization subnetwork. The key idea is to train our network to accomplish both the objective of style transfer and style removal, so that it can learn to disentangle and recombine the content and style features of text effects images. To support the training of our network, we propose a new text effects dataset with as much as 64 professionally designed styles on 837 characters. We show that the disentangled feature representations enable us to transfer or remove all these styles on arbitrary glyphs using one network. Furthermore, the flexible network design empowers TET-GAN to efficiently extend to a new text style via one-shot learning where only one example is required. We demonstrate the superiority of the proposed method in generating high-quality stylized text over the state-of-the-art methods.
Text is one of the most important visual elements in our daily life, and there is some work on style transfer specific to text. Taking advantage of the accessibility of abundant font images, many works @cite_25 @cite_16 @cite_20 trained neural networks to learn stroke styles for font transfer. However, another type of style, namely text effects, has not been studied much. It was not until 2017 that the work of @cite_10 first raised the text effects transfer problem. The authors proposed to match and synthesize image patches based on their correlated positions on the glyph, which is vulnerable to glyph differences and has a heavy computational burden. Meanwhile, Azadi combined font transfer and text effects transfer using two successive subnetworks and trained them end-to-end on a synthesized gradient font dataset. However, they can only handle @math capital letters at a small size of @math , and their synthesized dataset differs greatly from actual text effects. By contrast, we build our dataset using in-the-wild text effects with a size of @math , enabling our network to render exquisite text effects for any glyph.
{ "cite_N": [ "@cite_16", "@cite_10", "@cite_25", "@cite_20" ], "mid": [ "2780436321", "2558128964", "2559635221", "2770449959" ], "abstract": [ "Automatically writing stylized Chinese characters is an attractive yet challenging task due to its wide applicabilities. In this paper, we propose a novel framework named Style-Aware Variational Auto-Encoder (SA-VAE) to flexibly generate Chinese characters. Specifically, we propose to capture the different characteristics of a Chinese character by disentangling the latent features into content-related and style-related components. Considering of the complex shapes and structures, we incorporate the structure information as prior knowledge into our framework to guide the generation. Our framework shows a powerful one-shot low-shot generalization ability by inferring the style component given a character with unseen style. To the best of our knowledge, this is the first attempt to learn to write new-style Chinese characters by observing only one or a few examples. Extensive experiments demonstrate its effectiveness in generating different stylized Chinese characters by fusing the feature vectors corresponding to different contents and styles, which is of significant importance in real-world applications.", "In this work, we explore the problem of generating fantastic special-effects for the typography. It is quite challenging due to the model diversities to illustrate varied text effects for different characters. To address this issue, our key idea is to exploit the analytics on the high regularity of the spatial distribution for text effects to guide the synthesis process. Specifically, we characterize the stylized patches by their normalized positions and the optimal scales to depict their style elements. Our method first estimates these two features and derives their correlation statistically. They are then converted into soft constraints for texture transfer to accomplish adaptive multi-scale texture synthesis and to make style element distribution uniform. It allows our algorithm to produce artistic typography that fits for both local texture patterns and the global spatial distribution in the example. Experimental results demonstrate the superiority of our method for various text effects over conventional style transfer methods. In addition, we validate the effectiveness of our algorithm with extensive artistic typography library generation.", "Generating personal handwriting fonts with large amounts of characters is a boring and time-consuming task. Take Chinese fonts as an example, the official standard GB18030-2000 for commercial font products contains 27533 simplified Chinese characters. Consistently and correctly writing out such huge amounts of characters is usually an impossible mission for ordinary people. To solve this problem, we propose a handy system to automatically synthesize personal handwritings for all characters (e.g., Chinese) in the font library by learning style from a small number (as few as 1 ) of carefully-selected samples written by an ordinary person. Experiments including Turing tests with 69 participants demonstrate that the proposed system generates high-quality synthesis results which are indistinguishable from original handwritings. Using our system, for the first time the practical handwriting font library in a user's personal style with arbitrarily large numbers of Chinese characters can be generated automatically.", "Neural style transfer has drawn broad attention in recent years. 
However, most existing methods aim to explicitly model the transformation between different styles, and the learned model is thus not generalizable to new styles. We here attempt to separate the representations for styles and contents, and propose a generalized style transfer network consisting of style encoder, content encoder, mixer and decoder. The style encoder and content encoder are used to extract the style and content factors from the style reference images and content reference images, respectively. The mixer employs a bilinear model to integrate the above two factors and finally feeds it into a decoder to generate images with target style and content. To separate the style features and content features, we leverage the conditional dependence of styles and contents given an image. During training, the encoder network learns to extract styles and contents from two sets of reference images in limited size, one with shared style and the other with shared content. This learning framework allows simultaneous style transfer among multiple styles and can be deemed as a special multi-task' learning scenario. The encoders are expected to capture the underlying features for different styles and contents which is generalizable to new styles and contents. For validation, we applied the proposed algorithm to the Chinese Typeface transfer problem. Extensive experiment results on character generation have demonstrated the effectiveness and robustness of our method." ] }
1812.06286
2503567466
In software engineering, impact analysis consists in predicting the software elements (e.g. modules, classes, methods) potentially impacted by a change in the source code. Impact analysis is required to optimize the testing effort. In this paper, we propose a framework to predict error propagation. Based on 10 open-source Java projects and 5 classical mutation operators, we create 17000 mutants and study how the error they introduce propagates. This framework enables us to analyze impact prediction based on four types of call graph. Our results show that the sophistication indeed increases completeness of impact prediction. However, and surprisingly to us, the most basic call graph gives the highest trade-off between precision and recall for impact prediction.
Mutation testing is an old concept which has seen many contributions over the years. Jia and Harman propose a survey on this topic @cite_13 . In this section, we focus on the work that is related to ours. The most closely related work has already been discussed in .
{ "cite_N": [ "@cite_13" ], "mid": [ "2135841285" ], "abstract": [ "Mutation Testing is a fault-based software testing technique that has been widely studied for over three decades. The literature on Mutation Testing has contributed a set of approaches, tools, developments, and empirical results. This paper provides a comprehensive analysis and survey of Mutation Testing. The paper also presents the results of several development trend analyses. These analyses provide evidence that Mutation Testing techniques and tools are reaching a state of maturity and applicability, while the topic of Mutation Testing itself is the subject of increasing interest." ] }
1812.06286
2503567466
In software engineering, impact analysis consists in predicting the software elements (e.g. modules, classes, methods) potentially impacted by a change in the source code. Impact analysis is required to optimize the testing effort. In this paper, we propose a framework to predict error propagation. Based on 10 open-source Java projects and 5 classical mutation operators, we create 17000 mutants and study how the error they introduce propagates. This framework enables us to analyze impact prediction based on four types of call graph. Our results show that the sophistication indeed increases completeness of impact prediction. However, and surprisingly to us, the most basic call graph gives the highest trade-off between precision and recall for impact prediction.
Strug and Strug @cite_19 use control flow graphs and classification for detecting similar mutants. Their approach is intended to reduce the number of mutants considered when doing mutation testing. We use these tools for change impact analysis.
{ "cite_N": [ "@cite_19" ], "mid": [ "11644230" ], "abstract": [ "This paper deals with an approach based on the similarity of mutants. This similarity is used to reduce the number of mutants to be executed. In order to calculate such a similarity among mutants their structure is used. Each mutant is converted into a hierarchical graph, which represents the program’s flow, variables and conditions. On the basis of this graph form a special graph kernel is defined to calculate similarity among programs. It is then used to predict whether a given test would detect a mutant or not. The prediction is carried out with the help of a classification algorithm. This approach should help to lower the number of mutants which have to be executed. An experimental validation of this approach is also presented in this paper. An example of a program used in experiments is described and the results obtained, especially classification errors, are presented." ] }
1812.06286
2503567466
In software engineering, impact analysis consists in predicting the software elements (e.g. modules, classes, methods) potentially impacted by a change in the source code. Impact analysis is required to optimize the testing effort. In this paper, we propose a framework to predict error propagation. Based on 10 open-source Java projects and 5 classical mutation operators, we create 17000 mutants and study how the error they introduce propagates. This framework enables us to analyze impact prediction based on four types of call graph. Our results show that the sophistication indeed increases completeness of impact prediction. However, and surprisingly to us, the most basic call graph gives the highest trade-off between precision and recall for impact prediction.
Do and Rothermel @cite_17 describe a protocol to study test case prioritization techniques based on mutation. Their protocol and ours share the same idea: using mutation to determine which test cases are impacted by a change. However, we have a different goal: they study test case prioritization whereas we study impact prediction.
{ "cite_N": [ "@cite_17" ], "mid": [ "2101780873" ], "abstract": [ "Regression testing is an important part of software maintenance, but it can also be very expensive. To reduce this expense, software testers may prioritize their test cases so that those that are more important are run earlier in the regression testing process. Previous work has shown that prioritization can improve a test suite's rate of fault detection, but the assessment of prioritization techniques has been limited to hand-seeded faults, primarily due to the belief that such faults are more realistic than automatically generated (mutation) faults. A recent empirical study, however, suggests that mutation faults can be representative of real faults. We have therefore designed and performed a controlled experiment to assess the ability of prioritization techniques to improve the rate of fault detection techniques, measured relative to mutation faults. Our results show that prioritization can be effective relative to the faults considered, and they expose ways in which that effectiveness can vary with characteristics of faults and test suites. We also compare our results to those collected earlier with respect to the relationship between hand-seeded faults and mutation faults, and the implications this has for researchers performing empirical studies of prioritization." ] }
1812.06286
2503567466
In software engineering, impact analysis consists in predicting the software elements (e.g. modules, classes, methods) potentially impacted by a change in the source code. Impact analysis is required to optimize the testing effort. In this paper, we propose a framework to predict error propagation. Based on 10 open-source Java projects and 5 classical mutation operators, we create 17000 mutants and study how the error they introduce propagates. This framework enables us to analyze impact prediction based on four types of call graph. Our results show that the sophistication indeed increases completeness of impact prediction. However, and surprisingly to us, the most basic call graph gives the highest trade-off between precision and recall for impact prediction.
Change impact analysis has been studied for many years and many algorithms have been proposed. Several categorizations of such algorithms exist. Bohner and Arnold proposed two types of analysis: dependency analysis and traceability analysis @cite_24. The former analyzes the source code of the program at a relatively fine granularity (method calls, data usage, control statements, etc.), while the latter compares elements at a coarser granularity, such as documentation and specifications (e.g., UML). Moreover, different types of impact determination techniques are presented. According to this categorization, our approach is a dependency analysis based on a transitive closure technique.
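To make the transitive-closure idea concrete, the following minimal Python sketch predicts an impact set by repeatedly following callee-to-caller edges of a call graph until a fixed point is reached. It illustrates the general technique only, not the paper's implementation; the call graph and the method names are hypothetical.

from collections import deque

def predict_impact(callers_of, changed_method):
    # Transitive closure over "is called by" edges: collect every method
    # reachable from the changed method by following callee -> caller links.
    impacted = set()
    queue = deque([changed_method])
    while queue:
        m = queue.popleft()
        for caller in callers_of.get(m, ()):
            if caller not in impacted:
                impacted.add(caller)
                queue.append(caller)
    return impacted

# Hypothetical call graph: each method maps to the methods that call it.
callers_of = {
    "List.add": ["Stack.push", "Queue.offer"],
    "Stack.push": ["Parser.parse"],
    "Parser.parse": ["Main.main"],
}
print(predict_impact(callers_of, "List.add"))
# {'Stack.push', 'Queue.offer', 'Parser.parse', 'Main.main'}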
{ "cite_N": [ "@cite_24" ], "mid": [ "1548254758" ], "abstract": [ "From the Publisher: As software systems become increasingly large and complex, the need increases to predict and control the effects of software changes. Software Change Impact Analysis captures the latest information on the science and art of determining what software parts affect each other. It provides a battery of ideas for doing impact analysis better, presents a framework for the field, and focuses attention on important results. You will gain a healthy respect for the strengths and limitations of impact analysis technology and a solid background that will prove valuable for years to come. The book identifies key impact analysis definitions and themes and illustrates the important themes to give you a solid understanding for tackling impact analysis problems. It includes reports on software source code dependency analysis and software traceability analysis and shows how results from both areas can more effectively support impact analysis in software engineering repositories. It also describes why impact representation and determination techniques are at the heart of both source dependency analysis and traceability analysis." ] }
1812.06286
2503567466
In software engineering, impact analysis consists in predicting the software elements (e.g. modules, classes, methods) potentially impacted by a change in the source code. Impact analysis is required to optimize the testing effort. In this paper, we propose a framework to predict error propagation. Based on 10 open-source Java projects and 5 classical mutation operators, we create 17000 mutants and study how the error they introduce propagates. This framework enables us to analyze impact prediction based on four types of call graph. Our results show that the sophistication indeed increases completeness of impact prediction. However, and surprisingly to us, the most basic call graph gives the highest trade-off between precision and recall for impact prediction.
Bohner and Arnold @cite_24 and Li et al. @cite_1 list the notable graph-based approaches. Different types or variants of software graphs have been used to perform change impact analysis; a common example is the program dependence graph (PDG) @cite_11. In the present paper, we focus on the call graph.
{ "cite_N": [ "@cite_24", "@cite_1", "@cite_11" ], "mid": [ "1548254758", "2111403266", "2169063818" ], "abstract": [ "From the Publisher: As software systems become increasingly large and complex, the need increases to predict and control the effects of software changes. Software Change Impact Analysis captures the latest information on the science and art of determining what software parts affect each other. It provides a battery of ideas for doing impact analysis better, presents a framework for the field, and focuses attention on important results. You will gain a healthy respect for the strengths and limitations of impact analysis technology and a solid background that will prove valuable for years to come. The book identifies key impact analysis definitions and themes and illustrates the important themes to give you a solid understanding for tackling impact analysis problems. It includes reports on software source code dependency analysis and software traceability analysis and shows how results from both areas can more effectively support impact analysis in software engineering repositories. It also describes why impact representation and determination techniques are at the heart of both source dependency analysis and traceability analysis.", "SUMMARY Software change impact analysis (CIA) is a technique for identifying the effects of a change, or estimating what needs to be modified to accomplish a change. Since the 1980s, there have been many investigations on CIA, especially for code-based CIA techniques. However, there have been very few surveys on this topic. This article tries to fill this gap. And 30 papers that provide empirical evaluation on 23 code-based CIA techniques are identified. Then, data was synthesized against four research questions. The study presents a comparative framework including seven properties, which characterize the CIA techniques, and identifies key applications of CIA techniques in software maintenance. In addition, the need for further research is also presented in the following areas: evaluating existing CIA techniques and proposing new CIA techniques under the proposed framework, developing more mature tools to support CIA, comparing current CIA techniques empirically with unified metrics and common benchmarks, and applying the CIA more extensively and effectively in the software maintenance phase. Copyright © 2012 John Wiley & Sons, Ltd.", "Dependence analysis is useful for software maintenance because it indicates the possible effects of a software modification on the rest of a program. This helps the software maintainer evaluate the appropriateness of a software modification, drive regression testing, and determine the vulnerability of critical sections of code. A definition of interprocedural dependence analysis is given, and its implementation in a prototype tool that supports software maintenance is described. >" ] }
1812.06286
2503567466
In software engineering, impact analysis consists in predicting the software elements (e.g. modules, classes, methods) potentially impacted by a change in the source code. Impact analysis is required to optimize the testing effort. In this paper, we propose a framework to predict error propagation. Based on 10 open-source Java projects and 5 classical mutation operators, we create 17000 mutants and study how the error they introduce propagates. This framework enables us to analyze impact prediction based on four types of call graph. Our results show that the sophistication indeed increases completeness of impact prediction. However, and surprisingly to us, the most basic call graph gives the highest trade-off between precision and recall for impact prediction.
Walker et al. @cite_21 propose an impact analysis tool named TRE. Their approach uses conditional probability dependency graphs in which a node represents a class and there is an edge from a class A to a class B if A contains anything resolving to B. The conditional probabilities are estimated from data extracted from the CVS repository; more precisely, they are estimated from the number of times two classes are changed in the same commit. Then, the impact of a change is determined based on the resulting graphical model. They work at the level of classes and give no concrete information about the evaluation. In contrast, we work at a finer granularity (methods), which gives us more realistic data, and we report numerical evidence for Java packages.
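As an illustration of the co-change statistics behind such a graphical model, the short Python sketch below estimates the conditional probability that class B changes given that class A changes, from a commit history. The commit data and class names are hypothetical, and this is not Walker et al.'s actual tool.

from collections import defaultdict
from itertools import permutations

# Hypothetical history: each entry is the set of classes touched by one commit.
commits = [
    {"A", "B"},
    {"A", "B", "C"},
    {"A"},
    {"B", "C"},
]

changes = defaultdict(int)     # how often each class changed
co_changes = defaultdict(int)  # how often an ordered pair changed together

for classes in commits:
    for c in classes:
        changes[c] += 1
    for a, b in permutations(classes, 2):
        co_changes[(a, b)] += 1

def p_change(b, given):
    # Estimated probability that class `b` changes when class `given` changes.
    return co_changes[(given, b)] / changes[given] if changes[given] else 0.0

print(p_change("B", given="A"))  # 0.666... with this toy history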
{ "cite_N": [ "@cite_21" ], "mid": [ "2156555135" ], "abstract": [ "An evolutionary development approach is increasingly commonplace in industry but presents increased difficulties in risk management, for both technical and organizational reasons. In this context, technical risk is the product of the probability of a technical event and the cost of that event. This paper presents a technique for more objectively assessing and communicating technical risk in an evolutionary development setting that (1) operates atop weakly-estimated knowledge of the changes to be made, (2) analyzes the past change history and current structure of a system to estimate the probability of change propagation, and (3) can be discussed vertically within an organization both with development staff and high-level management. A tool realizing this technique has been developed for the Eclipse IDE." ] }
1812.06286
2503567466
In software engineering, impact analysis consists in predicting the software elements (e.g. modules, classes, methods) potentially impacted by a change in the source code. Impact analysis is required to optimize the testing effort. In this paper, we propose a framework to predict error propagation. Based on 10 open-source Java projects and 5 classical mutation operators, we create 17000 mutants and study how the error they introduce propagates. This framework enables us to analyze impact prediction based on four types of call graph. Our results show that the sophistication indeed increases completeness of impact prediction. However, and surprisingly to us, the most basic call graph gives the highest trade-off between precision and recall for impact prediction.
Zimmermann and Nagappan @cite_9 propose to use dependency graphs to estimate the most critical parts of a piece of software. Their approach uses network measures and complexity metrics to make the predictions. They assess their findings on popular yet proprietary software, where they are able to identify the parts of the software that can cause issues. In contrast, we propose a technique to determine which parts of a piece of software would be impacted by a potential change. Moreover, we evaluate our approach on several open-source software packages.
{ "cite_N": [ "@cite_9" ], "mid": [ "2135198476" ], "abstract": [ "In software development, resources for quality assurance are limited by time and by cost. In order to allocate resources effectively, managers need to rely on their experience backed by code complexity metrics. But often dependencies exist between various pieces of code over which managers may have little knowledge. These dependencies can be construed as a low level graph of the entire system. In this paper, we propose to use network analysis on these dependency graphs. This allows managers to identify central program units that are more likely to face defects. In our evaluation on Windows Server 2003, we found that the recall for models built from network measures is by 10 points higher than for models built from complexity metrics. In addition, network measures could identify 60 of the binaries that the Windows developers considered as critical-twice as many as identified by complexity metrics." ] }
1812.06286
2503567466
In software engineering, impact analysis consists in predicting the software elements (e.g. modules, classes, methods) potentially impacted by a change in the source code. Impact analysis is required to optimize the testing effort. In this paper, we propose a framework to predict error propagation. Based on 10 open-source Java projects and 5 classical mutation operators, we create 17000 mutants and study how the error they introduce propagates. This framework enables us to analyze impact prediction based on four types of call graph. Our results show that the sophistication indeed increases completeness of impact prediction. However, and surprisingly to us, the most basic call graph gives the highest trade-off between precision and recall for impact prediction.
Antoniol et al. @cite_30 also address impact analysis. However, they consider a slightly different problem setting, because they take as input a bug report or a modification request rather than a single source code element as we do. Their approach is less accurate as it takes documentation (bug reports) into consideration for change impact analysis. Our approach is more realistic as it is source-code centric: we only deal with existing elements obtained from the source code. The same argument applies to the recent work by Gethers and colleagues @cite_7.
{ "cite_N": [ "@cite_30", "@cite_7" ], "mid": [ "2148069233", "2104074028" ], "abstract": [ "This paper deals with impact analysis and proposes a method based on information retrieval techniques to trace the text of a maintenance request onto the set of system components initially affected by the maintenance request. The correct identification of such components is crucial for the future design and implementation of the change. The paper also discusses results from a preliminary case study where two different information retrieval approaches have been used to retrieve the software documents relevant for a maintenance request.", "The paper presents an adaptive approach to perform impact analysis from a given change request to source code. Given a textual change request (e.g., a bug report), a single snapshot (release) of source code, indexed using Latent Semantic Indexing, is used to estimate the impact set. Should additional contextual information be available, the approach configures the best-fit combination to produce an improved impact set. Contextual information includes the execution trace and an initial source code entity verified for change. Combinations of information retrieval, dynamic analysis, and data mining of past source code commits are considered. The research hypothesis is that these combinations help counter the precision or recall deficit of individual techniques and improve the overall accuracy. The tandem operation of the three techniques sets it apart from other related solutions. Automation along with the effective utilization of two key sources of developer knowledge, which are often overlooked in impact analysis at the change request level, is achieved. To validate our approach, we conducted an empirical evaluation on four open source software systems. A benchmark consisting of a number of maintenance issues, such as feature requests and bug fixes, and their associated source code changes was established by manual examination of these systems and their change history. Our results indicate that there are combinations formed from the augmented developer contextual information that show statistically significant improvement over stand-alone approaches." ] }
1812.06286
2503567466
In software engineering, impact analysis consists in predicting the software elements (e.g. modules, classes, methods) potentially impacted by a change in the source code. Impact analysis is required to optimize the testing effort. In this paper, we propose a framework to predict error propagation. Based on 10 open-source Java projects and 5 classical mutation operators, we create 17000 mutants and study how the error they introduce propagates. This framework enables us to analyze impact prediction based on four types of call graph. Our results show that the sophistication indeed increases completeness of impact prediction. However, and surprisingly to us, the most basic call graph gives the highest trade-off between precision and recall for impact prediction.
A classical paper by Moritoni and Winkler @cite_26 also studies error propagation, but they do it with the goal of obtaining a perfect approximation. By contrast, we perform approximations with the goal of exploring other trade-offs between precision and recall for impact prediction. Their work is more theoretical in essence and is evaluated only on small toy examples, whereas we propose a study on real, large-scale open-source code.
{ "cite_N": [ "@cite_26" ], "mid": [ "2167881892" ], "abstract": [ "It is pointed out that the incremental cost of a change to a program is often disproportionately high because of inadequate means of determining the semantic effects of the change. A practical logical technique for finding the semantic effects of changes through a direct analysis of the program is presented. The programming language features considered include parametrized modules, procedures, and global variables. The logic described is approximate in that weak (conservative) results sometimes are inferred. Isolating the exact effects of a change is undecidable in general. The basis for an approximation is a structural interpretation of the information-flow relationships among program objects. The approximate inference system is concise, abstract, extensible, and decidable, giving it significant advantages over the main alternative formalizations. The authors' implementation of the logic records the justification for each dependency to facilitate the interpretation of results. >" ] }
1812.06286
2503567466
In software engineering, impact analysis consists in predicting the software elements (e.g. modules, classes, methods) potentially impacted by a change in the source code. Impact analysis is required to optimize the testing effort. In this paper, we propose a framework to predict error propagation. Based on 10 open-source Java projects and 5 classical mutation operators, we create 17000 mutants and study how the error they introduce propagates. This framework enables us to analyze impact prediction based on four types of call graph. Our results show that the sophistication indeed increases completeness of impact prediction. However, and surprisingly to us, the most basic call graph gives the highest trade-off between precision and recall for impact prediction.
Michael and Jones @cite_31 alter variables during the program's execution in order to study how this affects ("perturbates", in their phrasing) the software. They focus on data-state perturbation, whereas we take a more global view of the software. Considering only variable perturbation does not capture all the ways an error can propagate. According to our experiments, call edges better reflect propagation than variable edges.
{ "cite_N": [ "@cite_31" ], "mid": [ "1928546442" ], "abstract": [ "This paper presents an empirical study of an important aspect of software defect behavior: the propagation of data-state errors. A data-state error occurs when a fault is executed and affects a program's data-state, and it is said to propagate if it affects the outcome of the execution. Our results show that data-state errors appear to have a property that is quite useful when simulating faulty code: for a given input, it appears that either all data state errors injected at a given location tend to propagate to the output, or else none of them do. These results are interesting because of what they indicate about the behavior of data-state errors in software. They suggest that data state errors behave in an orderly way, and that the behavior of software may not be as unpredictable as it could theoretically be. Additionally, if all faults behave the same for a given input and a given location, then one can use simulation to get a good picture of how faults behave, regardless of whether the faults one has simulated are representative of real faults." ] }
1812.06286
2503567466
In software engineering, impact analysis consists in predicting the software elements (e.g. modules, classes, methods) potentially impacted by a change in the source code. Impact analysis is required to optimize the testing effort. In this paper, we propose a framework to predict error propagation. Based on 10 open-source Java projects and 5 classical mutation operators, we create 17000 mutants and study how the error they introduce propagates. This framework enables us to analyze impact prediction based on four types of call graph. Our results show that the sophistication indeed increases completeness of impact prediction. However, and surprisingly to us, the most basic call graph gives the highest trade-off between precision and recall for impact prediction.
Challet and Lombardoni @cite_28 propose a theoretical reflection on impact analysis using graphs. However, they do not evaluate the validity of their "bug basins" as we do in this paper.
{ "cite_N": [ "@cite_28" ], "mid": [ "1955700867" ], "abstract": [ "We address the issue of how software components are affected by the failure of one of them, and the inverse problem of locating the faulty component. Because of the functional form of the incoming link distribution of software dependence network, software is fragile with respect to the failure of a random single component. Locating a faulty component is easy if the failure only affects its nearest neighbors, while it is hard if it propagates further." ] }
1812.06286
2503567466
In software engineering, impact analysis consists in predicting the software elements (e.g. modules, classes, methods) potentially impacted by a change in the source code. Impact analysis is required to optimize the testing effort. In this paper, we propose a framework to predict error propagation. Based on 10 open-source Java projects and 5 classical mutation operators, we create 17000 mutants and study how the error they introduce propagates. This framework enables us to analyze impact prediction based on four types of call graph. Our results show that the sophistication indeed increases completeness of impact prediction. However, and surprisingly to us, the most basic call graph gives the highest trade-off between precision and recall for impact prediction.
Robillard and Murphy @cite_32 introduce "concern graphs" for reasoning about the implementation of features. This kind of graph may be assessed with the protocol we have presented here.
{ "cite_N": [ "@cite_32" ], "mid": [ "2118944299" ], "abstract": [ "Many maintenance tasks address concerns, or features, that are not well modularized in the source code comprising a system. Existing approaches available to help software developers locate and manage scattered concerns use a representation based on lines of source code, complicating the analysis of the concerns. In this paper, we introduce the concern graph representation that abstracts the implementation details of a concern and makes explicit the relationships between different parts of the concern. The abstraction used in a Concern Graph has been designed to allow an obvious and inexpensive mapping back to the corresponding source code. To investigate the practical tradeoffs related to this approach, we have built the feature exploration and analysis tool (FEAT) that allows a developer to manipulate a concern representation extracted from a Java system, and to analyze the relationships of that concern to the code base. We have used this tool to find and describe concerns related to software change tasks. We have performed case studies to evaluate the feasibility, usability, and scalability of the approach. Our results indicate that concern graphs can be used to document a concern for change, that developers unfamiliar with concern graphs can use them effectively, and that the underlying technology scales to industrial-sized programs." ] }
1812.06286
2503567466
In software engineering, impact analysis consists in predicting the software elements (e.g. modules, classes, methods) potentially impacted by a change in the source code. Impact analysis is required to optimize the testing effort. In this paper, we propose a framework to predict error propagation. Based on 10 open-source Java projects and 5 classical mutation operators, we create 17000 mutants and study how the error they introduce propagates. This framework enables us to analyze impact prediction based on four types of call graph. Our results show that the sophistication indeed increases completeness of impact prediction. However, and surprisingly to us, the most basic call graph gives the highest trade-off between precision and recall for impact prediction.
Binkley et al. @cite_6 @cite_18 propose observation-based slicing (ORBS). They propose to slice a piece of software in a "delete-execute-observe" paradigm, in which the effects of a change are observed after executing the code (e.g., by running test cases). This paradigm is comparable to our approach, where we mutate the code and then run tests to observe the impacts. However, the two techniques are nonetheless very different: their technique works at a rather low granularity (statements), which makes it resource demanding. Our approach has the advantage of being light enough that one can use it for run-time prediction.
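To show the shape of such an observation-based loop, here is a schematic Python sketch of a mutate-execute-observe procedure in the spirit of the comparison above. The helper callables (generate_mutants, apply_mutant, run_tests) are hypothetical placeholders, not an existing API, and this is not the exact framework used in the paper.

def observed_impacts(project, test_suite, generate_mutants, apply_mutant, run_tests):
    # run_tests returns a dict mapping each test to its verdict (e.g. pass/fail).
    baseline = run_tests(project, test_suite)
    impacts = {}
    for mutant in generate_mutants(project):
        mutated = apply_mutant(project, mutant)  # inject one artificial error
        outcome = run_tests(mutated, test_suite)
        # A test whose verdict differs from the baseline is considered impacted.
        impacts[mutant] = {t for t in test_suite if outcome[t] != baseline[t]}
    return impacts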
{ "cite_N": [ "@cite_18", "@cite_6" ], "mid": [ "2150038380", "2167153801" ], "abstract": [ "Observation-based slicing is a recently-introduced, language-independent slicing technique based on the dependencies observable from program behaviour. Due to the well-known limits of dynamic analysis, we may only compute an under-approximation of the true observation-based slice. However, because the observation-based slice captures all possible dependence that can be observed, even such approximations can yield insight into the limitations of static slicing. For example, a static slice, S, that is strictly smaller than the corresponding observation based slice is potentially unsafe. We present the results of three sets of experiments on 12 different programs, including benchmarks and larger programs, which investigate the relationship between static and observation-based slicing. We show that, in extreme cases, observation-based slices can find the true minimal static slice, where static techniques cannot. For more typical cases, our results illustrate the potential for observation-based slicing to highlight limitations in static slicers. Finally, we report on the sensitivity of observation-based slicing to test quality.", "Current slicing techniques cannot handle systems written in multiple programming languages. Observation-Based Slicing (ORBS) is a language-independent slicing technique capable of slicing multi-language systems, including systems which contain (third party) binary components. A potential slice obtained through repeated statement deletion is validated by observing the behaviour of the program: if the slice and original program behave the same under the slicing criterion, the deletion is accepted. The resulting slice is similar to a dynamic slice. We evaluate five variants of ORBS on ten programs of different sizes and languages showing that it is less expensive than similar existing techniques. We also evaluate it on bash and four other systems to demonstrate feasible large-scale operation in which a parallelised ORBS needs up to 82 less time when using four threads. The results show that an ORBS slicer is simple to construct, effective at slicing, and able to handle systems written in multiple languages without specialist analysis tools." ] }
1812.06286
2503567466
In software engineering, impact analysis consists in predicting the software elements (e.g. modules, classes, methods) potentially impacted by a change in the source code. Impact analysis is required to optimize the testing effort. In this paper, we propose a framework to predict error propagation. Based on 10 open-source Java projects and 5 classical mutation operators, we create 17000 mutants and study how the error they introduce propagates. This framework enables us to analyze impact prediction based on four types of call graph. Our results show that the sophistication indeed increases completeness of impact prediction. However, and surprisingly to us, the most basic call graph gives the highest trade-off between precision and recall for impact prediction.
Ren et al. @cite_15 propose a tool named Chianti for change impact prediction, implemented as an Eclipse plug-in. However, beyond the common idea of reasoning about impacts, they target a completely different problem: we aim at finding sensitive methods, while they aim at finding the change responsible for a failure (e.g., a bug-inducing commit). Naturally, their techniques and evaluation follow completely different paths.
{ "cite_N": [ "@cite_15" ], "mid": [ "2038899190" ], "abstract": [ "This paper reports on the design and implementation of Chianti, a change impact analysis tool for Java that is implemented in the context of the Eclipse environment. Chianti analyzes two versions of an application and decomposes their difference into a set of atomic changes. Change impact is then reported in terms of affected (regression or unit) tests whose execution behavior may have been modified by the applied changes. For each affected test, Chianti also determines a set of affecting changes that were responsible for the test's modified behavior. This latter step of isolating the changes that induce the failure of one specific test from those changes that only affect other tests can be used as a debugging technique in situations where a test fails unexpectedly after a long editing session. We evaluated Chianti on a year (2002) of CVS data from M. Ernst's Daikon system, and found that, on average, 52 of Daikon's unit tests are affected. Furthermore, each affected unit test, on average, is affected by only 3.95 of the atomic changes. These findings suggest that our change impact analysis is a promising technique for assisting developers with program understanding and debugging." ] }
1812.06162
2903697572
In an increasing number of domains it has been demonstrated that deep learning models can be trained using relatively large batch sizes without sacrificing data efficiency. However the limits of this massive data parallelism seem to differ from domain to domain, ranging from batches of tens of thousands in ImageNet to batches of millions in RL agents that play the game Dota 2. To our knowledge there is limited conceptual understanding of why these limits to batch size differ or how we might choose the correct batch size in a new domain. In this paper, we demonstrate that a simple and easy-to-measure statistic called the gradient noise scale predicts the largest useful batch size across many domains and applications, including a number of supervised learning datasets (MNIST, SVHN, CIFAR-10, ImageNet, Billion Word), reinforcement learning domains (Atari and Dota), and even generative model training (autoencoders on SVHN). We find that the noise scale increases as the loss decreases over a training run and depends on the model size primarily through improved model performance. Our empirically-motivated theory also describes the tradeoff between compute-efficiency and time-efficiency, and provides a rough model of the benefits of adaptive batch-size training.
Recent papers have probed the limits of large batch training empirically, especially for ImageNet @cite_24 @cite_26 @cite_9, in some cases using layer-wise adaptive learning rates @cite_21. More recent work has demonstrated that large batch training can also be applied to RL @cite_56 @cite_44 @cite_18 @cite_11. The use of second-order optimization methods @cite_36 might increase the utility of data parallelism even further. A thorough review of large batch training and potential issues with generalization was provided in a very nice recent empirical study @cite_30 done in parallel with this work. @cite_55 also systematically studied large-batch training, though it did not tune the learning rate separately for each batch size.
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_26", "@cite_36", "@cite_9", "@cite_21", "@cite_55", "@cite_56", "@cite_24", "@cite_44", "@cite_11" ], "mid": [ "2900167092", "2793035934", "2755682530", "", "", "2757910899", "", "", "2769856846", "", "2786928559" ], "abstract": [ "Recent hardware developments have made unprecedented amounts of data parallelism available for accelerating neural network training. Among the simplest ways to harness next-generation accelerators is to increase the batch size in standard mini-batch neural network training algorithms. In this work, we aim to experimentally characterize the effects of increasing the batch size on training time, as measured in the number of steps necessary to reach a goal out-of-sample error. Eventually, increasing the batch size will no longer reduce the number of training steps required, but the exact relationship between the batch size and how many training steps are necessary is of critical importance to practitioners, researchers, and hardware designers alike. We study how this relationship varies with the training algorithm, model, and data set and find extremely large variation between workloads. Along the way, we reconcile disagreements in the literature on whether batch size affects model quality. Finally, we discuss the implications of our results for efforts to train neural networks much faster in the future.", "Deep reinforcement learning (RL) has achieved many recent successes, yet experiment turn-around time remains a key bottleneck in research and in practice. We investigate how to optimize existing deep RL algorithms for modern computers, specifically for a combination of CPUs and GPUs. We confirm that both policy gradient and Q-value learning algorithms can be adapted to learn using many parallel simulator instances. We further find it possible to train using batch sizes considerably larger than are standard, without negatively affecting sample complexity or final performance. We leverage these facts to build a unified framework for parallelization that dramatically hastens experiments in both classes of algorithm. All neural network computations use GPUs, accelerating both data collection and training. Our results include using an entire NVIDIA DGX-1 to learn successful strategies in Atari games in single-digit minutes, using both synchronous and asynchronous algorithms.", "Finishing 90-epoch ImageNet-1k training with ResNet-50 on a NVIDIA M40 GPU takes 14 days. This training requires 10^18 single precision operations in total. On the other hand, the world's current fastest supercomputer can finish 2 * 10^17 single precision operations per second ( 2017, this https URL). If we can make full use of the supercomputer for DNN training, we should be able to finish the 90-epoch ResNet-50 training in one minute. However, the current bottleneck for fast DNN training is in the algorithm level. Specifically, the current batch size (e.g. 512) is too small to make efficient use of many processors. For large-scale DNN training, we focus on using large-batch data-parallelism synchronous SGD without losing accuracy in the fixed epochs. The LARS algorithm (You, Gitman, Ginsburg, 2017, arXiv:1708.03888) enables us to scale the batch size to extremely large case (e.g. 32K). We finish the 100-epoch ImageNet training with AlexNet in 11 minutes on 1024 CPUs. About three times faster than Facebook's result ( 2017, arXiv:1706.02677), we finish the 90-epoch ImageNet training with ResNet-50 in 20 minutes on 2048 KNLs without losing accuracy. 
State-of-the-art ImageNet training speed with ResNet-50 is 74.9 top-1 test accuracy in 15 minutes. We got 74.9 top-1 test accuracy in 64 epochs, which only needs 14 minutes. Furthermore, when we increase the batch size to above 16K, our accuracy is much higher than Facebook's on corresponding batch sizes. Our source code is available upon request.", "", "", "A common way to speed up training of large convolutional networks is to add computational units. Training is then performed using data-parallel synchronous Stochastic Gradient Descent (SGD) with mini-batch divided between computational units. With an increase in the number of nodes, the batch size grows. But training with large batch size often results in the lower model accuracy. We argue that the current recipe for large batch training (linear learning rate scaling with warm-up) is not general enough and training may diverge. To overcome this optimization difficulties we propose a new training algorithm based on Layer-wise Adaptive Rate Scaling (LARS). Using LARS, we scaled Alexnet up to a batch size of 8K, and Resnet-50 to a batch size of 32K without loss in accuracy.", "", "", "We demonstrate that training ResNet-50 on ImageNet for 90 epochs can be achieved in 15 minutes with 1024 Tesla P100 GPUs. This was made possible by using a large minibatch size of 32k. To maintain accuracy with this large minibatch size, we employed several techniques such as RMSprop warm-up, batch normalization without moving averages, and a slow-start learning rate schedule. This paper also describes the details of the hardware and software of the system used to achieve the above performance.", "", "We propose a distributed architecture for deep reinforcement learning at scale, that enables agents to learn effectively from orders of magnitude more data than previously possible. The algorithm decouples acting from learning: the actors interact with their own instances of the environment by selecting actions according to a shared neural network, and accumulate the resulting experience in a shared experience replay memory; the learner replays samples of experience and updates the neural network. The architecture relies on prioritized experience replay to focus only on the most significant data generated by the actors. Our architecture substantially improves the state of the art on the Arcade Learning Environment, achieving better final performance in a fraction of the wall-clock training time." ] }
1812.06162
2903697572
In an increasing number of domains it has been demonstrated that deep learning models can be trained using relatively large batch sizes without sacrificing data efficiency. However the limits of this massive data parallelism seem to differ from domain to domain, ranging from batches of tens of thousands in ImageNet to batches of millions in RL agents that play the game Dota 2. To our knowledge there is limited conceptual understanding of why these limits to batch size differ or how we might choose the correct batch size in a new domain. In this paper, we demonstrate that a simple and easy-to-measure statistic called the gradient noise scale predicts the largest useful batch size across many domains and applications, including a number of supervised learning datasets (MNIST, SVHN, CIFAR-10, ImageNet, Billion Word), reinforcement learning domains (Atari and Dota), and even generative model training (autoencoders on SVHN). We find that the noise scale increases as the loss decreases over a training run and depends on the model size primarily through improved model performance. Our empirically-motivated theory also describes the tradeoff between compute-efficiency and time-efficiency, and provides a rough model of the benefits of adaptive batch-size training.
Other recent work has explored the impact of gradient noise on optimization speed and batch size selection. @cite_16 connected gradient noise and the locally optimal step size to identify an adaptive learning rate. @cite_19 derived a sampling distribution for SGD, motivating our definition of 'temperature'. @cite_59 connected this temperature to the critical batch size, though they predict a dependence on dataset size which we do not observe. @cite_58 identified a signal-dominated and a noise-dominated phase of training. @cite_25 showed that decaying the learning rate and increasing the batch size have the same effect, motivated by the SGD training temperature. ( @cite_7 also suggested increasing the learning rate and batch size together, but with a different motivation.) @cite_42 empirically investigated the role of gradient noise in reinforcement learning.
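As an illustrative worked equation (restating the form published in the cited papers; the exact expressions are elided as @math in the reference abstracts below), the SGD noise scale or 'temperature' these works reason about is

    g = \epsilon \left( \frac{N}{B} - 1 \right) \approx \frac{\epsilon N}{B} \qquad (B \ll N),

where \epsilon is the learning rate, N the training set size, and B the batch size. Holding g fixed, \epsilon \to \epsilon / 2 is therefore approximately equivalent to B \to 2B, which is the learning-rate/batch-size trade that @cite_25 exploits.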
{ "cite_N": [ "@cite_7", "@cite_42", "@cite_58", "@cite_19", "@cite_59", "@cite_16", "@cite_25" ], "mid": [ "2773689216", "", "2593634001", "2962915600", "2765733029", "2120420045", "2766164908" ], "abstract": [ "Training deep neural networks with Stochastic Gradient Descent, or its variants, requires careful choice of both learning rate and batch size. While smaller batch sizes generally converge in fewer training epochs, larger batch sizes offer more parallelism and hence better computational efficiency. We have developed a new training approach that, rather than statically choosing a single batch size for all epochs, adaptively increases the batch size during the training process. Our method delivers the convergence rate of small batch sizes while achieving performance similar to large batch sizes. We analyse our approach using the standard AlexNet, ResNet, and VGG networks operating on the popular CIFAR-10, CIFAR-100, and ImageNet datasets. Our results demonstrate that learning with adaptive batch sizes can improve performance by factors of up to 6.25 on 4 NVIDIA Tesla P100 GPUs while changing accuracy by less than 1 relative to training with fixed batch sizes.", "", "Despite their great success, there is still no comprehensive theoretical understanding of learning with Deep Neural Networks (DNNs) or their inner organization. Previous work proposed to analyze DNNs in the ; i.e., the plane of the Mutual Information values that each layer preserves on the input and output variables. They suggested that the goal of the network is to optimize the Information Bottleneck (IB) tradeoff between compression and prediction, successively, for each layer. In this work we follow up on this idea and demonstrate the effectiveness of the Information-Plane visualization of DNNs. Our main results are: (i) most of the training epochs in standard DL are spent on ph compression of the input to efficient representation and not on fitting the training labels. (ii) The representation compression phase begins when the training errors becomes small and the Stochastic Gradient Decent (SGD) epochs change from a fast drift to smaller training error into a stochastic relaxation, or random diffusion, constrained by the training error value. (iii) The converged layers lie on or very close to the Information Bottleneck (IB) theoretical bound, and the maps from the input to any hidden layer and from this hidden layer to the output satisfy the IB self-consistent equations. This generalization through noise mechanism is unique to Deep Neural Networks and absent in one layer networks. (iv) The training time is dramatically reduced when adding more hidden layers. Thus the main advantage of the hidden layers is computational. This can be explained by the reduced relaxation time, as this it scales super-linearly (exponentially for simple diffusion) with the information compression from the previous layer.", "", "This paper tackles two related questions at the heart of machine learning; how can we predict if a minimum will generalize to the test set, and why does stochastic gradient descent find minima that generalize well? Our work is inspired by (2017), who showed deep networks can easily memorize randomly labeled training data, despite generalizing well when shown real labels of the same inputs. We show here that the same phenomenon occurs in small linear models. These observations are explained by evaluating the Bayesian evidence, which penalizes sharp minima but is invariant to model parameterization. 
We also explore the \"generalization gap\" between small and large batch training, identifying an optimum batch size which maximizes the test set accuracy. Interpreting stochastic gradient descent as a stochastic differential equation, we identify a \"noise scale\" @math , where @math is the learning rate, @math training set size and @math batch size. Consequently the optimum batch size is proportional to the learning rate and the training set size, @math . We verify these predictions empirically.", "The performance of stochastic gradient descent (SGD) depends critically on how learning rates are tuned and decreased over time. We propose a method to automatically adjust multiple learning rates so as to minimize the expected error at any one time. The method relies on local gradient variations across samples. In our approach, learning rates can increase as well as decrease, making it suitable for non-stationary problems. Using a number of convex and non-convex learning tasks, we show that the resulting algorithm matches the performance of SGD or other adaptive approaches with their best settings obtained through systematic search, and effectively removes the need for learning rate tuning.", "It is common practice to decay the learning rate. Here we show one can usually obtain the same learning curve on both training and test sets by instead increasing the batch size during training. This procedure is successful for stochastic gradient descent (SGD), SGD with momentum, Nesterov momentum, and Adam. It reaches equivalent test accuracies after the same number of training epochs, but with fewer parameter updates, leading to greater parallelism and shorter training times. We can further reduce the number of parameter updates by increasing the learning rate @math and scaling the batch size @math . Finally, one can increase the momentum coefficient @math and scale @math , although this tends to slightly reduce the test accuracy. Crucially, our techniques allow us to repurpose existing training schedules for large batch training with no hyper-parameter tuning. We train ResNet-50 on ImageNet to @math validation accuracy in under 30 minutes." ] }