aid | mid | abstract | related_work | ref_abstract |
---|---|---|---|---|
1907.12413 | 2965312352 | Argumentation Mining addresses the challenging tasks of identifying boundaries of argumentative text fragments and extracting their relationships. Fully automated solutions do not reach satisfactory accuracy due to their insufficient incorporation of semantics and domain knowledge. Therefore, experts currently rely on time-consuming manual annotations. In this paper, we present a visual analytics system that augments the manual annotation process by automatically suggesting which text fragments to annotate next. The accuracy of those suggestions is improved over time by incorporating linguistic knowledge and language modeling to learn a measure of argument similarity from user interactions. Based on a long-term collaboration with domain experts, we identify and model five high-level analysis tasks. We enable close reading and note-taking, annotation of arguments, argument reconstruction, extraction of argument relations, and exploration of argument graphs. To avoid context switches, we transition between all views through seamless morphing, visually anchoring all text- and graph-based layers. We evaluate our system with a two-stage expert user study based on a corpus of presidential debates. The results show that experts prefer our system over existing solutions due to the speedup provided by the automatic suggestions and the tight integration between text and graph views. | Recent years have seen a rise of interactive machine learning @cite_41 and such techniques are now commonly integrated into visual analytics systems, as recently surveyed by @cite_50 . Often, they are used to learn model refinements from user interaction @cite_42 or provide @cite_40 . Semantic interactions are typically performed with the intent of refining or steering a machine-learning model. In , expert users perform semantic interactions, as their primary goal is the annotation of argumentation. The result is a concealed machine teaching process @cite_14 that is not an end in itself, but a "by-product" of the annotation. | {
"cite_N": [
"@cite_14",
"@cite_41",
"@cite_42",
"@cite_40",
"@cite_50"
],
"mid": [
"2006710607",
"2122514299",
"2794719971",
"2963611534",
"2785325870"
],
"abstract": [
"Machine learning offers a range of tools for training systems from data, but these methods are only as good as the underlying representation. This paper proposes to acquire representations for machine learning by reading text written to accommodate human learning. We propose a novel form of semantic analysis called reading to learn, where the goal is to obtain a high-level semantic abstract of multiple documents in a representation that facilitates learning. We obtain this abstract through a generative model that requires no labeled data, instead leveraging repetition across multiple documents. The semantic abstract is converted into a transformed feature space for learning, resulting in improved generalization on a relational learning task.",
"Most previous work on trainable language generation has focused on two paradigms: (a) using a generation decisions of an existing generator. Both approaches rely on the existence of a handcrafted generation component, which is likely to limit their scalability to new domains. The first contribution of this article is to present Bagel, a fully data-driven generation method that treats the language generation task as a search for the most likely sequence of semantic concepts and realization phrases, according to Factored Language Models (FLMs). As domain utterances are not readily available for most natural language generation tasks, a large creative effort is required to produce the data necessary to represent human linguistic variation for nontrivial domains. This article is based on the assumption that learning to produce paraphrases can be facilitated by collecting data from a large sample of untrained annotators using crowdsourcing—rather than a few domain experts—by relying on a coarse meaning representation. A second contribution of this article is to use crowdsourced data to show how dialogue naturalness can be improved by learning to vary the output utterances generated for a given semantic input. Two data-driven methods for generating paraphrases in dialogue are presented: (a) by sampling from the n-best list of realizations produced by Bagel's FLM reranker; and (b) by learning a structured perceptron predicting whether candidate realizations are valid paraphrases. We train Bagel on a set of 1,956 utterances produced by 137 annotators, which covers 10 types of dialogue acts and 128 semantic concepts in a tourist information system for Cambridge. An automated evaluation shows that Bagel outperforms utterance class LM baselines on this domain. A human evaluation of 600 resynthesized dialogue extracts shows that Bagel's FLM output produces utterances comparable to a handcrafted baseline, whereas the perceptron classifier performs worse. Interestingly, human judges find the system sampling from the n-best list to be more natural than a system always returning the first-best utterance. The judges are also more willing to interact with the n-best system in the future. These results suggest that capturing the large variation found in human language using data-driven methods is beneficial for dialogue interaction.",
"Interaction and collaboration between humans and intelligent machines has become increasingly important as machine learning methods move into real-world applications that involve end users. While much prior work lies at the intersection of natural language and vision, such as image captioning or image generation from text descriptions, less focus has been placed on the use of language to guide or improve the performance of a learned visual processing algorithm. In this paper, we explore methods to flexibly guide a trained convolutional neural network through user input to improve its performance during inference. We do so by inserting a layer that acts as a spatio-semantic guide into the network. This guide is trained to modify the network's activations, either directly via an energy minimization scheme or indirectly through a recurrent model that translates human language queries to interaction weights. Learning the verbal interaction is fully automatic and does not require manual text annotations. We evaluate the method on two datasets, showing that guiding a pre-trained network can improve performance, and provide extensive insights into the interaction between the guide and the CNN.",
"Despite the availability of a huge amount of video data accompanied by descriptive texts, it is not always easy to exploit the information contained in natural language in order to automatically recognize video concepts. Towards this goal, in this paper we use textual cues as means of supervision, introducing two weakly supervised techniques that extend the Multiple Instance Learning (MIL) framework: the Fuzzy Sets Multiple Instance Learning (FSMIL) and the Probabilistic Labels Multiple Instance Learning (PLMIL). The former encodes the spatio-temporal imprecision of the linguistic descriptions with Fuzzy Sets, while the latter models different interpretations of each description's semantics with Probabilistic Labels, both formulated through a convex optimization algorithm. In addition, we provide a novel technique to extract weak labels in the presence of complex semantics, that consists of semantic similarity computations. We evaluate our methods on two distinct problems, namely face and action recognition, in the challenging and realistic setting of movies accompanied by their screenplays, contained in the COGNIMUSE database. We show that, on both tasks, our method considerably outperforms a state-of-the-art weakly supervised approach, as well as other baselines.",
"Over the last years, deep convolutional neural networks (ConvNets) have transformed the field of computer vision thanks to their unparalleled capacity to learn high level semantic image features. However, in order to successfully learn those features, they usually require massive amounts of manually labeled data, which is both expensive and impractical to scale. Therefore, unsupervised semantic feature learning, i.e., learning without requiring manual annotation effort, is of crucial importance in order to successfully harvest the vast amount of visual data that are available today. In our work we propose to learn image features by training ConvNets to recognize the 2d rotation that is applied to the image that it gets as input. We demonstrate both qualitatively and quantitatively that this apparently simple task actually provides a very powerful supervisory signal for semantic feature learning. We exhaustively evaluate our method in various unsupervised feature learning benchmarks and we exhibit in all of them state-of-the-art performance. Specifically, our results on those benchmarks demonstrate dramatic improvements w.r.t. prior state-of-the-art approaches in unsupervised representation learning and thus significantly close the gap with supervised feature learning. For instance, in PASCAL VOC 2007 detection task our unsupervised pre-trained AlexNet model achieves the state-of-the-art (among unsupervised methods) mAP of 54.4 that is only 2.4 points lower from the supervised case. We get similarly striking results when we transfer our unsupervised learned features on various other tasks, such as ImageNet classification, PASCAL classification, PASCAL segmentation, and CIFAR-10 classification. The code and models of our paper will be published on: this https URL ."
]
} |
1907.12413 | 2965312352 | Argumentation Mining addresses the challenging tasks of identifying boundaries of argumentative text fragments and extracting their relationships. Fully automated solutions do not reach satisfactory accuracy due to their insufficient incorporation of semantics and domain knowledge. Therefore, experts currently rely on time-consuming manual annotations. In this paper, we present a visual analytics system that augments the manual annotation process by automatically suggesting which text fragments to annotate next. The accuracy of those suggestions is improved over time by incorporating linguistic knowledge and language modeling to learn a measure of argument similarity from user interactions. Based on a long-term collaboration with domain experts, we identify and model five high-level analysis tasks. We enable close reading and note-taking, annotation of arguments, argument reconstruction, extraction of argument relations, and exploration of argument graphs. To avoid context switches, we transition between all views through seamless morphing, visually anchoring all text- and graph-based layers. We evaluate our system with a two-stage expert user study based on a corpus of presidential debates. The results show that experts prefer our system over existing solutions due to the speedup provided by the automatic suggestions and the tight integration between text and graph views. | Several systems combine the close and distant reading metaphors to provide deeper insights into textual data, such as @cite_36 or @cite_43 . @cite_25 have developed a tool called , which combines focus- and context-techniques to support the analysis of large text documents. The tool enables exploration of text through novel navigation methods and allows the extraction of entities and other concepts. places all close and distant-reading views next to each other, following the metaphor by Wörner and Ertl @cite_51 . instead "stacks" the different views into task-dependent layers. | {
"cite_N": [
"@cite_36",
"@cite_43",
"@cite_51",
"@cite_25"
],
"mid": [
"2012118336",
"2051088039",
"2016630033",
"1493108551"
],
"abstract": [
"Interactive visualization provides valuable support for exploring, analyzing, and understanding textual documents. Certain tasks, however, require that insights derived from visual abstractions are verified by a human expert perusing the source text. So far, this problem is typically solved by offering overview-detail techniques, which present different views with different levels of abstractions. This often leads to problems with visual continuity. Focus-context techniques, on the other hand, succeed in accentuating interesting subsections of large text documents but are normally not suited for integrating visual abstractions. With VarifocalReader we present a technique that helps to solve some of these approaches' problems by combining characteristics from both. In particular, our method simplifies working with large and potentially complex text documents by simultaneously offering abstract representations of varying detail, based on the inherent structure of the document, and access to the text itself. In addition, VarifocalReader supports intra-document exploration through advanced navigation concepts and facilitates visual analysis tasks. The approach enables users to apply machine learning techniques and search mechanisms as well as to assess and adapt these techniques. This helps to extract entities, concepts and other artifacts from texts. In combination with the automatic generation of intermediate text levels through topic segmentation for thematic orientation, users can test hypotheses or develop interesting new research questions. To illustrate the advantages of our approach, we provide usage examples from literature studies.",
"Visual analytics emphasizes sensemaking of large, complex datasets through interactively exploring visualizations generated by statistical models. For example, dimensionality reduction methods use various similarity metrics to visualize textual document collections in a spatial metaphor, where similarities between documents are approximately represented through their relative spatial distances to each other in a 2D layout. This metaphor is designed to mimic analysts' mental models of the document collection and support their analytic processes, such as clustering similar documents together. However, in current methods, users must interact with such visualizations using controls external to the visual metaphor, such as sliders, menus, or text fields, to directly control underlying model parameters that they do not understand and that do not relate to their analytic process occurring within the visual metaphor. In this paper, we present the opportunity for a new design space for visual analytic interaction, called semantic interaction, which seeks to enable analysts to spatially interact with such models directly within the visual metaphor using interactions that derive from their analytic process, such as searching, highlighting, annotating, and repositioning documents. Further, we demonstrate how semantic interactions can be implemented using machine learning techniques in a visual analytic tool, called ForceSPIRE, for interactive analysis of textual data within a spatial visualization. Analysts can express their expert domain knowledge about the documents by simply moving them, which guides the underlying model to improve the overall layout, taking the user's feedback into account.",
"This paper describes a system and set of algorithms for automatically inducing stand-alone monolingual part-of-speech taggers, base noun-phrase bracketers, named-entity taggers and morphological analyzers for an arbitrary foreign language. Case studies include French, Chinese, Czech and Spanish.Existing text analysis tools for English are applied to bilingual text corpora and their output projected onto the second language via statistically derived word alignments. Simple direct annotation projection is quite noisy, however, even with optimal alignments. Thus this paper presents noise-robust tagger, bracketer and lemmatizer training procedures capable of accurate system bootstrapping from noisy and incomplete initial projections.Performance of the induced stand-alone part-of-speech tagger applied to French achieves 96 core part-of-speech (POS) tag accuracy, and the corresponding induced noun-phrase bracketer exceeds 91 F-measure. The induced morphological analyzer achieves over 99 lemmatization accuracy on the complete French verbal system.This achievement is particularly noteworthy in that it required absolutely no hand-annotated training data in the given language, and virtually no language-specific knowledge or resources beyond raw text. Performance also significantly exceeds that obtained by direct annotation projection.",
"This dissertation investigates the role of contextual information in the automated retrieval and display of full-text documents, using robust natural language processing algorithms to automatically detect structure in and assign topic labels to texts. Many long texts are comprised of complex topic and subtopic structure, a fact ignored by existing information access methods. I present two algorithms which detect such structure, and two visual display paradigms which use the results of these algorithms to show the interactions of multiple main topics, multiple subtopics, and the relations between main topics and subtopics. The first algorithm, called TextTiling , recognizes the subtopic structure of texts as dictated by their content. It uses domain-independent lexical frequency and distribution information to partition texts into multi-paragraph passages. The results are found to correspond well to reader judgments of major subtopic boundaries. The second algorithm assigns multiple main topic labels to each text, where the labels are chosen from pre-defined, intuitive category sets; the algorithm is trained on unlabeled text. A new iconic representation, called TileBars uses TextTiles to simultaneously and compactly display query term frequency, query term distribution and relative document length. This representation provides an informative alternative to ranking long texts according to their overall similarity to a query. For example, a user can choose to view those documents that have an extended discussion of one set of terms and a brief but overlapping discussion of a second set of terms. This representation also allows for relevance feedback on patterns of term distribution. TileBars display documents only in terms of words supplied in the user query. For a given retrieved text, if the query words do not correspond to its main topics, the user cannot discern in what context the query terms were used. For example, a query on contaminants may retrieve documents whose main topics relate to nuclear power, food, or oil spills. To address this issue, I describe a graphical interface, called Cougar , that displays retrieved documents in terms of interactions among their automatically-assigned main topics, thus allowing users to familiarize themselves with the topics and terminology of a text collection."
]
} |
1907.12413 | 2965312352 | Argumentation Mining addresses the challenging tasks of identifying boundaries of argumentative text fragments and extracting their relationships. Fully automated solutions do not reach satisfactory accuracy due to their insufficient incorporation of semantics and domain knowledge. Therefore, experts currently rely on time-consuming manual annotations. In this paper, we present a visual analytics system that augments the manual annotation process by automatically suggesting which text fragments to annotate next. The accuracy of those suggestions is improved over time by incorporating linguistic knowledge and language modeling to learn a measure of argument similarity from user interactions. Based on a long-term collaboration with domain experts, we identify and model five high-level analysis tasks. We enable close reading and note-taking, annotation of arguments, argument reconstruction, extraction of argument relations, and exploration of argument graphs. To avoid context switches, we transition between all views through seamless morphing, visually anchoring all text- and graph-based layers. We evaluate our system with a two-stage expert user study based on a corpus of presidential debates. The results show that experts prefer our system over existing solutions due to the speedup provided by the automatic suggestions and the tight integration between text and graph views. | In recent years, several web-based interfaces have been created to support users in various text annotation tasks. For example, @cite_46 can be used for the annotation of POS tags or named entities. In this interface, annotations are made directly in the text by dragging the mouse over multiple words or clicking on a single word. employs the same interactions for text annotation. Another web-based annotation tool is called @cite_65 ; it allows annotations of named entities and their relations. @cite_49 use automatic entity extraction for annotating relationships between media streams. TimeLineCurator @cite_28 automatically extracts temporal events from unstructured text data and enables users to curate them in a visual, annotated timeline. @cite_54 have presented a collaborative text annotation framework and emphasize the importance of pre-annotation to significantly reduce annotation costs. have presented a framework that creates BRAT-compatible pre-annotations @cite_22 and discuss (dis-)advantages of pre-annotation. The initial suggestions of could be seen as pre-annotations, but are automatically updated after each interaction. | {
"cite_N": [
"@cite_22",
"@cite_28",
"@cite_54",
"@cite_65",
"@cite_49",
"@cite_46"
],
"mid": [
"2251026067",
"1493490255",
"1784290353",
"2136409007",
"2124634352",
"2087501015"
],
"abstract": [
"In this paper, we present a flexible approach to the efficient and exhaustive manual annotation of text documents. For this purpose, we extend WebAnno (, 2013) an open-source web-based annotation tool. 1 While it was previously limited to specific annotation layers, our extension allows adding and configuring an arbitrary number of layers through a web-based UI. These layers can be annotated separately or simultaneously, and support most types of linguistic annotations such as spans, semantic classes, dependency relations, lexical chains, and morphology. Further, we tightly integrate a generic machine learning component for automatic annotation suggestions of span annotations. In two case studies, we show that automatic annotation suggestions, combined with our split-pane UI concept, significantly reduces annotation time.",
"Traditionally, Information Extraction (IE) has focused on satisfying precise, narrow, pre-specified requests from small homogeneous corpora (e.g., extract the location and time of seminars from a set of announcements). Shifting to a new domain requires the user to name the target relations and to manually create new extraction rules or hand-tag new training examples. This manual labor scales linearly with the number of target relations. This paper introduces Open IE (OIE), a new extraction paradigm where the system makes a single data-driven pass over its corpus and extracts a large set of relational tuples without requiring any human input. The paper also introduces TEXTRUNNER, a fully implemented, highly scalable OIE system where the tuples are assigned a probability and indexed to support efficient extraction and exploration via user queries. We report on experiments over a 9,000,000 Web page corpus that compare TEXTRUNNER with KNOWITALL, a state-of-the-art Web IE system. TEXTRUNNER achieves an error reduction of 33 on a comparable set of extractions. Furthermore, in the amount of time it takes KNOWITALL to perform extraction for a handful of pre-specified relations, TEXTRUNNER extracts a far broader set of facts reflecting orders of magnitude more relations, discovered on the fly. We report statistics on TEXTRUNNER's 11,000,000 highest probability tuples, and show that they contain over 1,000,000 concrete facts and over 6,500,000 more abstract assertions.",
"Over the last decades, several billion Web pages have been made available on the Web. The ongoing transition from the current Web of unstructured data to the Web of Data yet requires scalable and accurate approaches for the extraction of structured data in RDF (Resource Description Framework) from these websites. One of the key steps towards extracting RDF from text is the disambiguation of named entities. While several approaches aim to tackle this problem, they still achieve poor accuracy. We address this drawback by presenting AGDISTIS, a novel knowledge-base-agnostic approach for named entity disambiguation. Our approach combines the Hypertext-Induced Topic Search (HITS) algorithm with label expansion strategies and string similarity measures. Based on this combination, AGDISTIS can efficiently detect the correct URIs for a given set of named entities within an input text. We evaluate our approach on eight different datasets against state-of-the-art named entity disambiguation frameworks. Our results indicate that we outperform the state-of-the-art approach by up to 29 F-measure.",
"Today people typically read and annotate printed documents even if they are obtained from electronic sources like digital libraries If there is a reason for them to share these personal annotations online, they must re-enter them. Given the advent of better computer support for reading and annotation, including tablet interfaces, will people ever share their personal digital ink annotations as is, or will they make substantial changes to them? What can we do to anticipate and support the transition from personal to public annotations? To investigate these questions, we performed a study to characterize and compare students' personal annotations as they read assigned papers with those they shared with each other using an online system. By analyzing over 1, 700 annotations, we confirmed three hypotheses: (1) only a small fraction of annotations made while reading are directly related to those shared in discussion; (2) some types of annotations - those that consist of anchors in the text coupled with margin notes - are more apt to be the basis of public commentary than other types of annotations; and (3) personal annotations undergo dramatic changes when they are shared in discussion, both in content and in how they are anchored to the source document. We then use these findings to explore ways to support the transition from personal to public annotations.",
"Many corpus-based natural language processing systems rely on text corpora that have been manually annotated with syntactic or semantic tags. In particular, all previous dictionary construction systems for information extraction have used an annotated training corpus or some form of annotated input. We have developed a system called AutoSlog-TS that creates dictionaries of extraction patterns using only untagged text. AutoSlog-TS is based on the AutoSlog system, which generated extraction patterns using annotated text and a set of heuristic rules. By adapting AutoSlog and combining it with statistical techniques, we eliminated its dependency on tagged text. In experiments with the MUG-4 terrorism domain, AutoSlog-TS created a dictionary of extraction patterns that performed comparably to a dictionary created by AutoSlog, using only preclassified texts as input.",
"Automatic image annotation is still an important open problem in multimedia and computer vision. The success of media sharing websites has led to the availability of large collections of images tagged with human-provided labels. Many approaches previously proposed in the literature do not accurately capture the intricate dependencies between image content and annotations. We propose a learning procedure based on Kernel Canonical Correlation Analysis which finds a mapping between visual and textual words by projecting them into a latent meaning space. The learned mapping is then used to annotate new images using advanced nearest-neighbor voting methods. We evaluate our approach on three popular datasets, and show clear improvements over several approaches relying on more standard representations."
]
} |
1907.12336 | 2965762627 | Automatic data abstraction is an important capability for both benchmarking machine intelligence and supporting summarization applications. In the former one asks whether a machine can ‘understand’ enough about the meaning of input data to produce a meaningful but more compact abstraction. In the latter this capability is exploited for saving space or human time by summarizing the essence of input data. In this paper we study a general reinforcement learning based framework for learning to abstract sequential data in a goal-driven way. The ability to define different abstraction goals uniquely allows different aspects of the input data to be preserved according to the ultimate purpose of the abstraction. Our reinforcement learning objective does not require human-defined examples of ideal abstraction. Importantly our model processes the input sequence holistically without being constrained by the original input order. Our framework is also domain agnostic – we demonstrate applications to sketch, video and text data and achieve promising results in all domains. | Sketch recognition Early sketch recognition methods were developed to deal with professionally drawn sketches as in CAD or artistic drawings @cite_43 @cite_45 @cite_44 . The more challenging task of free-hand sketch recognition was first tackled in @cite_36 along with the release of the first large-scale dataset of amateur sketches. Since then the task has been well-studied using both classic vision @cite_3 @cite_24 as well as deep learning approaches @cite_53 . Recent successful deep learning approaches have spanned both primarily non-sequential CNN @cite_53 @cite_30 and sequential RNN @cite_39 @cite_10 recognizers. We use both CNN and RNN-based multi-class classifiers to provide rewards for our RL based sketch abstraction framework. | {
"cite_N": [
"@cite_30",
"@cite_36",
"@cite_53",
"@cite_3",
"@cite_39",
"@cite_44",
"@cite_43",
"@cite_24",
"@cite_45",
"@cite_10"
],
"mid": [
"2549858052",
"2493181180",
"2743832495",
"2322404333",
"2783270818",
"2027125558",
"2048568806",
"2797225766",
"2515723519",
"2949877834"
],
"abstract": [
"The paper presents a deep Convolutional Neural Network (CNN) framework for free-hand sketch recognition. One of the main challenges in free-hand sketch recognition is to increase the recognition accuracy on sketches drawn by different people. To overcome this problem, we use deep Convolutional Neural Networks (CNNs) that have dominated top results in the field of image recognition. And we use the contours of natural images for training, because sketches drawn by different people may be very different and databases of the sketch images for training are very limited. We propose a CNN training on contours that performs well on sketch recognition over different databases of the sketch images. And we make some adjustments to the contours for training and reach higher recognition accuracy. Experimental results show the effectiveness of the proposed approach.",
"We propose a deep learning approach to free-hand sketch recognition that achieves state-of-the-art performance, significantly surpassing that of humans. Our superior performance is a result of modelling and exploiting the unique characteristics of free-hand sketches, i.e., consisting of an ordered set of strokes but lacking visual cues such as colour and texture, being highly iconic and abstract, and exhibiting extremely large appearance variations due to different levels of abstraction and deformation. Specifically, our deep neural network, termed Sketch-a-Net has the following novel components: (i) we propose a network architecture designed for sketch rather than natural photo statistics. (ii) Two novel data augmentation strategies are developed which exploit the unique sketch-domain properties to modify and synthesise sketch training data at multiple abstraction levels. Based on this idea we are able to both significantly increase the volume and diversity of sketches for training, and address the challenge of varying levels of sketching detail commonplace in free-hand sketches. (iii) We explore different network ensemble fusion strategies, including a re-purposed joint Bayesian scheme, to further improve recognition performance. We show that state-of-the-art deep networks specifically engineered for photos of natural objects fail to perform well on sketch recognition, regardless whether they are trained using photos or sketches. Furthermore, through visualising the learned filters, we offer useful insights in to where the superior performance of our network comes from.",
"Recognizing freehand sketches with high arbitrariness is greatly challenging. Most existing methods either ignore the geometric characteristics or treat sketches as handwritten characters with fixed structural ordering. Consequently, they can hardly yield high recognition performance even though sophisticated learning techniques are employed. In this paper, we propose a sequential deep learning strategy that combines both shape and texture features. A coded shape descriptor is exploited to characterize the geometry of sketch strokes with high flexibility, while the outputs of constitutional neural networks (CNN) are taken as the abstract texture feature. We develop dual deep networks with memorable gated recurrent units (GRUs), and sequentially feed these two types of features into the dual networks, respectively. These dual networks enable the feature fusion by another gated recurrent unit (GRU), and thus accurately recognize sketches invariant to stroke ordering. The experiments on the TU-Berlin data set show that our method outperforms the average of human and state-of-the-art algorithms even when significant shape and appearance variations occur.",
"The recognition of pen-based visual patterns such as sketched symbols is amenable to supervised machine learning models such as neural networks. However, a sizable, labeled training corpus is often required to learn the high variations of freehand sketches. To circumvent the costs associated with creating a large training corpus, improve the recognition accuracy with only a limited amount of training samples and accelerate the development of sketch recognition system for novel sketch domains, we present a neural network training protocol that consists of three steps. First, a large pool of unlabeled, synthetic samples are generated from a small set of existing, labeled training samples. Then, a Deep Belief Network (DBN) is pre-trained with those synthetic, unlabeled samples. Finally, the pre-trained DBN is fine-tuned using the limited amount of labeled samples for classification. The training protocol is evaluated against supervised baseline approaches such as the nearest neighbor classifier and the neural network classifier. The benchmark data sets used are partitioned such that there are only a few labeled samples for training, yet a large number of labeled test cases featuring rich variations. Results suggest that our training protocol leads to a significant error reduction compared to the baseline approaches.",
"We start by asking an interesting yet challenging question, \"If an eyewitness can only recall the eye features of the suspect, such that the forensic artist can only produce a sketch of the eyes (e.g., the top-left sketch shown in Fig. 1), can advanced computer vision techniques help generate the whole face image?\" A more generalized question is that if a large proportion (e.g., more than 50 ) of the face sketch is missing, can a realistic whole face sketch image still be estimated. Existing face completion and generation methods either do not conduct domain transfer learning or can not handle large missing area. For example, the inpainting approach tends to blur the generated region when the missing area is large (i.e., more than 50 ). In this paper, we exploit the potential of deep learning networks in filling large missing region (e.g., as high as 95 missing) and generating realistic faces with high-fidelity in cross domains. We propose the recursive generation by bidirectional transformation networks (r-BTN) that recursively generates a whole face sketch from a small sketch face patch. The large missing area and the cross domain challenge make it difficult to generate satisfactory results using a unidirectional cross-domain learning structure. On the other hand, a forward and backward bidirectional learning between the face and sketch domains would enable recursive estimation of the missing region in an incremental manner (Fig. 1) and yield appealing results. r-BTN also adopts an adversarial constraint to encourage the generation of realistic faces sketches. Extensive experiments have been conducted to demonstrate the superior performance from r-BTN as compared to existing potential solutions.",
"Sketch recognition aims to automatically classify human hand sketches of objects into known categories. This has become increasingly a desirable capability due to recent advances in human computer interaction on portable devices. The problem is nontrivial because of the sparse and abstract nature of hand drawings as compared to photographic images of objects, compounded by a highly variable degree of details in human sketches. To this end, we present a method for the representation and matching of sketches by exploiting not only local features but also global structures of sketches, through a star graph based ensemble matching strategy. Different local feature representations were evaluated using the star graph model to demonstrate the effectiveness of the ensemble matching of structured features. We further show that by encapsulating holistic structure matching and learned bag-of-features models into a single framework, notable recognition performance improvement over the state-of-the-art can be observed. Extensive comparative experiments were carried out using the currently largest sketch dataset released by [15], with over 20,000 sketches of 250 object categories generated by AMT (Amazon Mechanical Turk) crowd-sourcing.",
"Free-hand sketch recognition has become increasingly popular due to the recent expansion of portable touchscreen devices. However, the problem is non-trivial due to the complexity of internal structures that leads to intra-class variations, coupled with the sparsity in visual cues that results in inter-class ambiguities. In order to address the structural complexity, a novel structured representation for sketches is proposed to capture the holistic structure of a sketch. Moreover, to overcome the visual cue sparsity problem and therefore achieve state-of-the-art recognition performance, we propose a Multiple Kernel Learning (MKL) framework for sketch recognition, fusing several features common to sketches. We evaluate the performance of all the proposed techniques on the most diverse sketch dataset to date (, 2012), and offer detailed and systematic analyses of the performance of different features and representations, including a breakdown by sketch-super-category. Finally, we investigate the use of attributes as a high-level feature for sketches and show how this complements low-level features for improving recognition performance under the MKL framework, and consequently explore novel applications such as attribute-based retrieval.",
"Human free-hand sketches have been studied in various contexts including sketch recognition, synthesis and fine-grained sketch-based image retrieval (FG-SBIR). A fundamental challenge for sketch analysis is to deal with drastically different human drawing styles, particularly in terms of abstraction level. In this work, we propose the first stroke-level sketch abstraction model based on the insight of sketch abstraction as a process of trading off between the recognizability of a sketch and the number of strokes used to draw it. Concretely, we train a model for abstract sketch generation through reinforcement learning of a stroke removal policy that learns to predict which strokes can be safely removed without affecting recognizability. We show that our abstraction model can be used for various sketch analysis tasks including: (1) modeling stroke saliency and understanding the decision of sketch recognition models, (2) synthesizing sketches of variable abstraction for a given category, or reference object instance in a photo, and (3) training a FG-SBIR model with photos only, bypassing the expensive photo-sketch pair collection step.",
"Freehand sketching is an inherently sequential process. Yet, most approaches for hand-drawn sketch recognition either ignore this sequential aspect or exploit it in an ad-hoc manner. In our work, we propose a recurrent neural network architecture for sketch object recognition which exploits the long-term sequential and structural regularities in stroke data in a scalable manner. Specifically, we introduce a Gated Recurrent Unit based framework which leverages deep sketch features and weighted per-timestep loss to achieve state-of-the-art results on a large database of freehand object sketches across a large number of object categories. The inherently online nature of our framework is especially suited for on-the-fly recognition of objects as they are being drawn. Thus, our framework can enable interesting applications such as camera-equipped robots playing the popular party game Pictionary with human players and generating sparsified yet recognizable sketches of objects.",
"Freehand sketching is an inherently sequential process. Yet, most approaches for hand-drawn sketch recognition either ignore this sequential aspect or exploit it in an ad-hoc manner. In our work, we propose a recurrent neural network architecture for sketch object recognition which exploits the long-term sequential and structural regularities in stroke data in a scalable manner. Specifically, we introduce a Gated Recurrent Unit based framework which leverages deep sketch features and weighted per-timestep loss to achieve state-of-the-art results on a large database of freehand object sketches across a large number of object categories. The inherently online nature of our framework is especially suited for on-the-fly recognition of objects as they are being drawn. Thus, our framework can enable interesting applications such as camera-equipped robots playing the popular party game Pictionary with human players and generating sparsified yet recognizable sketches of objects."
]
} |
1907.12383 | 2966710536 | Efficient transfers to many recipients present a host of issues on Ethereum. First, accounts are identified by long and incompressible constants. Second, these constants have to be stored and communicated for each payment. Third, the standard interface for token transfers does not support lists of recipients, adding repeated communication to the overhead. Since Ethereum charges resource usage, even small optimizations translate to cost savings. Airdrops, a popular marketing tool used to boost coin uptake, present a relevant example for the value of optimizing bulk transfers. Therefore, we review technical solutions for airdrops of Ethereum-based tokens, discuss features and prerequisites, and compare the operational costs by simulating 35 scenarios. We find that cost savings of factor two are possible, but require specific provisions in the smart contract implementing the token system. Pull-based approaches, which use on-chain interaction with the recipients, promise moderate savings for the distributor while imposing a disproportional cost on each recipient. Total costs are broadly linear in the number of recipients independent of the technical approach. We publish the code of the simulation framework for reproducibility, to support future airdrop decisions, and to benchmark innovative bulk payment solutions. | Howell @cite_5 study the success factors of 440 ICOs on based on proprietary transaction data, presumably acquired from exchanges and other intermediaries, and manual labeling. Their main interest is in the relationship between issuer characteristics and indicators of success. The regression analyses find highly significant positive effects on liquidity and volume of the token for independent variables measuring the existence of a white paper, the availability of code on Github, the support by venture capitalists, the entrepreneurs' experience, the acceptance of Bitcoin, and the organization of a pre-sale. No significant effect is found for airdrops. | {
"cite_N": [
"@cite_5"
],
"mid": [
"2811170310"
],
"abstract": [
"Initial coin offerings (ICOs) have emerged as a new mechanism for entrepreneurial finance, with parallels to initial public offerings, venture capital, and pre-sale crowdfunding. In a sample of more than 1,500 ICOs that collectively raise $12.9 billion, we examine which issuer and ICO characteristics predict success, measured using real outcomes (employment and issuer failure) and financial outcomes (token liquidity and volume). Success is associated with disclosure, credible commitment to the project, and quality signals. An instrumental variables analysis finds that ICO token exchange listing causes higher future employment, indicating that access to liquidity has important real consequences for the enterprise."
]
} |
1907.12383 | 2966710536 | Efficient transfers to many recipients present a host of issues on Ethereum. First, accounts are identified by long and incompressible constants. Second, these constants have to be stored and communicated for each payment. Third, the standard interface for token transfers does not support lists of recipients, adding repeated communication to the overhead. Since Ethereum charges resource usage, even small optimizations translate to cost savings. Airdrops, a popular marketing tool used to boost coin uptake, present a relevant example for the value of optimizing bulk transfers. Therefore, we review technical solutions for airdrops of Ethereum-based tokens, discuss features and prerequisites, and compare the operational costs by simulating 35 scenarios. We find that cost savings of factor two are possible, but require specific provisions in the smart contract implementing the token system. Pull-based approaches, which use on-chain interaction with the recipients, promise moderate savings for the distributor while imposing a disproportional cost on each recipient. Total costs are broadly linear in the number of recipients independent of the technical approach. We publish the code of the simulation framework for reproducibility, to support future airdrop decisions, and to benchmark innovative bulk payment solutions. | Chen @cite_12 identify underpriced instructions (even after the 2016 gas price adjustment) and propose an adaptive pricing scheme. Their main interest is to raise economic barriers against congestion, which in the worst case enables denial of service attacks on the systemic level. | {
"cite_N": [
"@cite_12"
],
"mid": [
"2015555034"
],
"abstract": [
"While electric vehicles (EVs) are expected to provide environmental and economical benefit, judicious coordination of EV charging is necessary to prevent overloading of the distribution grid. Leveraging the smart grid infrastructure, the utility company can adjust the electricity price intelligently for individual customers to elicit desirable load curves. In this context, this paper addresses the problem of predicting the EV charging behavior of the consumers at different prices, which is a prerequisite for optimal price adjustment. The dependencies on price responsiveness among consumers are captured by a conditional random field (CRF) model. To account for temporal dynamics potentially in a strategic setting, the framework of online convex optimization is adopted to develop an efficient online algorithm for tracking the CRF parameters. The proposed model is then used as an input to a stochastic profit maximization module for real-time price setting. Numerical tests using simulated and semi-real data verify the effectiveness of the proposed approach."
]
} |
1907.12383 | 2966710536 | Efficient transfers to many recipients present a host of issues on Ethereum. First, accounts are identified by long and incompressible constants. Second, these constants have to be stored and communicated for each payment. Third, the standard interface for token transfers does not support lists of recipients, adding repeated communication to the overhead. Since Ethereum charges resource usage, even small optimizations translate to cost savings. Airdrops, a popular marketing tool used to boost coin uptake, present a relevant example for the value of optimizing bulk transfers. Therefore, we review technical solutions for airdrops of Ethereum-based tokens, discuss features and prerequisites, and compare the operational costs by simulating 35 scenarios. We find that cost savings of factor two are possible, but require specific provisions in the smart contract implementing the token system. Pull-based approaches, which use on-chain interaction with the recipients, promise moderate savings for the distributor while imposing a disproportional cost on each recipient. Total costs are broadly linear in the number of recipients independent of the technical approach. We publish the code of the simulation framework for reproducibility, to support future airdrop decisions, and to benchmark innovative bulk payment solutions. | Airdrops are a rather new topic. We are aware of one academic paper only. Harrigan @cite_18 raises awareness of the privacy implications of airdrops when identifiers of one chain () are reused to distribute coins on another chain (). Sharing identifiers between chains in general gives additional clues for address clustering. | {
"cite_N": [
"@cite_18"
],
"mid": [
"2891594175"
],
"abstract": [
"Airdrops are a popular method of distributing cryptocurrencies and tokens. While often considered risk-free from the point of view of recipients, their impact on privacy is easily overlooked. We examine the Clam airdrop of 2014, a forerunner to many of today's airdrops, that distributed a new cryptocurrency to every address with a non-dust balance on the Bitcoin, Litecoin and Dogecoin blockchains. Specifically, we use address clustering to try to construct the one-to-many mappings from entities to addresses on the blockchains, individually and in combination. We show that the sharing of addresses between the blockchains is a privacy risk. We identify instances where an entity has disclosed information about their address ownership on the Bitcoin, Litecoin and Dogecoin blockchains, exclusively via their activity on the Clam blockchain."
]
} |
1907.12240 | 2966580058 | Business success of companies heavily depends on the availability and performance of their client applications. Due to modern development paradigms such as DevOps and microservice architectural styles, applications are decoupled into services with complex interactions and dependencies. Although these paradigms enable individual development cycles with reduced delivery times, they cause several challenges to manage the services in distributed systems. One major challenge is to observe and monitor such distributed systems. This paper provides a qualitative study to understand the challenges and good practices in the field of observability and monitoring of distributed systems. In 28 semi-structured interviews with software professionals we discovered increasing complexity and dynamics in that field. Especially observability becomes an essential prerequisite to ensure stable services and further development of client applications. However, the participants mentioned a discrepancy in the awareness regarding the importance of the topic, both from the management as well as from the developer perspective. Besides technical challenges, we identified a strong need for an organizational concept including strategy, roles and responsibilities. Our results support practitioners in developing and implementing systematic observability and monitoring for distributed systems. | The term originates in control system theory and measures the degree to which a system's internal state can be determined from its output @cite_21 . In cloud environments, observability indicates to what degree infrastructure and applications and their interactions can be monitored. Outputs used are for example logs, metrics and traces @cite_0 . | {
"cite_N": [
"@cite_0",
"@cite_21"
],
"mid": [
"1938602245",
"2134058295"
],
"abstract": [
"Controllability and observability have long been recognized as fundamental structural properties of dynamical systems, but have recently seen renewed interest in the context of large, complex networks of dynamical systems. A basic problem is sensor and actuator placement: choose a subset from a finite set of possible placements to optimize some real-valued controllability and observability metrics of the network. Surprisingly little is known about the structure of such combinatorial optimization problems. In this paper, we show that several important classes of metrics based on the controllability and observability Gramians have a strong structural property that allows for either efficient global optimization or an approximation guarantee by using a simple greedy heuristic for their maximization. In particular, the mapping from possible placements to several scalar functions of the associated Gramian is either a modular or submodular set function. The results are illustrated on randomly generated systems and on a problem of power-electronic actuator placement in a model of the European power grid.",
"Context: Many metrics are used in software engineering research as surrogates for maintainability of software systems. Aim: Our aim was to investigate whether such metrics are consistent among themselves and the extent to which they predict maintenance effort at the entire system level. Method: The Maintainability Index, a set of structural measures, two code smells (Feature Envy and God Class) and size were applied to a set of four functionally equivalent systems. The metrics were compared with each other and with the outcome of a study in which six developers were hired to perform three maintenance tasks on the same systems. Results: The metrics were not mutually consistent. Only system size and low cohesion were strongly associated with increased maintenance effort. Conclusion: Apart from size, surrogate maintainability measures may not reflect future maintenance effort. Surrogates need to be evaluated in the contexts for which they will be used. While traditional metrics are used to identify problematic areas in the code, the improvements of the worst areas may, inadvertently, lead to more problems for the entire system. Our results suggest that local improvements should be accompanied by an evaluation at the system level."
]
} |
1907.12240 | 2966580058 | Business success of companies heavily depends on the availability and performance of their client applications. Due to modern development paradigms such as DevOps and microservice architectural styles, applications are decoupled into services with complex interactions and dependencies. Although these paradigms enable individual development cycles with reduced delivery times, they cause several challenges to manage the services in distributed systems. One major challenge is to observe and monitor such distributed systems. This paper provides a qualitative study to understand the challenges and good practices in the field of observability and monitoring of distributed systems. In 28 semi-structured interviews with software professionals we discovered increasing complexity and dynamics in that field. Especially observability becomes an essential prerequisite to ensure stable services and further development of client applications. However, the participants mentioned a discrepancy in the awareness regarding the importance of the topic, both from the management as well as from the developer perspective. Besides technical challenges, we identified a strong need for an organizational concept including strategy, roles and responsibilities. Our results support practitioners in developing and implementing systematic observability and monitoring for distributed systems. | @cite_12 investigate the capturing of service execution paths in distributed systems. While capturing the execution path is challenging, as each request may cross many components of several servers, they introduce a generic end-to-end methodology to capture the entire request. During our interviews we found a need for transparency of execution paths as well as more generally interdependencies between services. | {
"cite_N": [
"@cite_12"
],
"mid": [
"2899648294"
],
"abstract": [
"Distributed platforms are widely deployed to provide services in various trades. With the increasing scale and complexity of these distributed platforms, it is becoming more and more challenging to understand and diagnose a service request’s processing in a distributed platform, as even one simple service request may traverse numerous heterogeneous components across multiple hosts. Thus, it is highly demanded to capture the complete end-to-end execution path of service requests among all involved components accurately. This paper presents REPTrace, a generic methodology for capturing the complete request execution path (REP) in a transparent fashion. We propose principles for identifying causal relationships among events for a comprehensive list of execution scenarios, and stitch all events to generate complete request execution paths based on library system calls tracing and network labelling. The experiments on different distributed platforms with different workloads show that REPTrace transparently captures the accurate request execution path with reasonable latency and negligible network overhead."
]
} |
1907.12240 | 2966580058 | Business success of companies heavily depends on the availability and performance of their client applications. Due to modern development paradigms such as DevOps and microservice architectural styles, applications are decoupled into services with complex interactions and dependencies. Although these paradigms enable individual development cycles with reduced delivery times, they cause several challenges to manage the services in distributed systems. One major challenge is to observe and monitor such distributed systems. This paper provides a qualitative study to understand the challenges and good practices in the field of observability and monitoring of distributed systems. In 28 semi-structured interviews with software professionals we discovered increasing complexity and dynamics in that field. Especially observability becomes an essential prerequisite to ensure stable services and further development of client applications. However, the participants mentioned a discrepancy in the awareness regarding the importance of the topic, both from the management as well as from the developer perspective. Besides technical challenges, we identified a strong need for an organizational concept including strategy, roles and responsibilities. Our results support practitioners in developing and implementing systematic observability and monitoring for distributed systems. | The current trend towards more flexible and modular distributed systems is characterized by using independent services, such as micro- or web services. While systems consisting of web services provide better observability than monolithic systems, services have the potential to enhance their observability and monitoring by giving relevant information about their internal behaviour. @cite_2 deal with the challenge that web service definitions do not have any information about their behaviour. They extend the web service definition by adding a behaviour logic description based on a constraint-based model-driven testing approach. During our interviews we identified that the behaviour especially of third-party services needs to be more clearly communicated to assess the impact on service levels and to detect and diagnose faults. | {
"cite_N": [
"@cite_2"
],
"mid": [
"2100716485"
],
"abstract": [
"For a system of distributed processes, correctness can be ensured by (statically) checking whether their composition satisfies properties of interest. In contrast, Web services are being designed so that each partner discovers properties of others dynamically, through a published interface. Since the overall system may not be available statically and since each business process is supposed to be relatively simple, we propose to use runtime monitoring of conversations between partners as a means of checking behavioural correctness of the entire web service system. Specifically, we identify a subset of UML 2.0 Sequence Diagrams as a property specification language and show that it is sufficiently expressive for capturing safety and liveness properties. By transforming these diagrams to automata, we enable conformance checking of finite execution traces against the specification. We describe an implementation of our approach as part of an industrial system and report on preliminary experience."
]
} |
1907.12240 | 2966580058 | Business success of companies heavily depends on the availability and performance of their client applications. Due to modern development paradigms such as DevOps and microservice architectural styles, applications are decoupled into services with complex interactions and dependencies. Although these paradigms enable individual development cycles with reduced delivery times, they cause several challenges to manage the services in distributed systems. One major challenge is to observe and monitor such distributed systems. This paper provides a qualitative study to understand the challenges and good practices in the field of observability and monitoring of distributed systems. In 28 semi-structured interviews with software professionals we discovered increasing complexity and dynamics in that field. Especially observability becomes an essential prerequisite to ensure stable services and further development of client applications. However, the participants mentioned a discrepancy in the awareness regarding the importance of the topic, both from the management as well as from the developer perspective. Besides technical challenges, we identified a strong need for an organizational concept including strategy, roles and responsibilities. Our results support practitioners in developing and implementing systematic observability and monitoring for distributed systems. | Besides monitoring individual service calls, it is important to predict the runtime performance of distributed systems. @cite_4 show that two techniques, benchmarking and simulation, have shortcomings if they are used separately, and they introduce and validate a complementary approach. Their approach presents a process which maps benchmark ontologies of simulations. This proves to be inexpensive, fast and reliable. Similarly, Lin et al. @cite_17 propose a novel way of root cause detection in microservice architectures utilizing causal graphs. In our interviews we found that performance is often only known when a system goes live, as the interdependencies between different services and their individual performance are not assessed beforehand. | {
"cite_N": [
"@cite_4",
"@cite_17"
],
"mid": [
"2344277659",
"2021931360"
],
"abstract": [
"We consider the problem of identifying the source of failure in a network after receiving alarms or having observed symptoms. To locate the root cause accurately and timely in a large communication system is challenging because a single fault can often result in a large number of alarms, and multiple faults can occur concurrently. In this paper, we present a new fault localization method using a machine-learning approach. We propose to use logistic regression to study the correlation among network events based on end-to-end measurements. Then based on the regression model, we develop fault hypothesis that best explains the observed symptoms. Unlike previous work, the machine-learning algorithm requires neither the knowledge of dependencies among network events, nor the probabilities of faults, nor the conditional probabilities of fault propagation as input. The “low requirement” feature makes it suitable for large complex networks where accurate dependencies and prior probabilities are difficult to obtain. We then evaluate the performance of the learning algorithm with respect to the accuracy of fault hypothesis and the concentration property. Experimental results and theoretical analysis both show satisfactory performance.",
"In this work, we study information leakage in timing side channels that arise in the context of shared event schedulers. Consider two processes, one of them an innocuous process (referred to as Alice) and the other a malicious one (referred to as Bob), using a common scheduler to process their jobs. Based on when his jobs get processed, Bob wishes to learn about the pattern (size and timing) of jobs of Alice. Depending on the context, knowledge of this pattern could have serious implications on Alice's privacy and security. For instance, shared routers can reveal traffic patterns, shared memory access can reveal cloud usage patterns, and suchlike. We present a formal framework to study the information leakage in shared resource schedulers using the pattern estimation error as a performance metric. In this framework, a uniform upper bound is derived to benchmark different scheduling policies. The first-come-first-serve scheduling policy is analyzed, and shown to leak significant information when the scheduler is loaded heavily. To mitigate the timing information leakage, we propose an “Accumulate-and-Serve” policy which trades in privacy for a higher delay. The policy is analyzed under the proposed framework and is shown to leak minimum information to the attacker, and is shown to have comparatively lower delay than a fixed scheduler that preemptively assigns service times irrespective of traffic patterns."
]
} |
1907.12240 | 2966580058 | Business success of companies heavily depends on the availability and performance of their client applications. Due to modern development paradigms such as DevOps and microservice architectural styles, applications are decoupled into services with complex interactions and dependencies. Although these paradigms enable individual development cycles with reduced delivery times, they cause several challenges to manage the services in distributed systems. One major challenge is to observe and monitor such distributed systems. This paper provides a qualitative study to understand the challenges and good practices in the field of observability and monitoring of distributed systems. In 28 semi-structured interviews with software professionals we discovered increasing complexity and dynamics in that field. Especially observability becomes an essential prerequisite to ensure stable services and further development of client applications. However, the participants mentioned a discrepancy in the awareness regarding the importance of the topic, both from the management as well as from the developer perspective. Besides technical challenges, we identified a strong need for an organizational concept including strategy, roles and responsibilities. Our results support practitioners in developing and implementing systematic observability and monitoring for distributed systems. | @cite_8 addresses runtime monitoring in continuous software deployment as a crucial task, especially for rapidly changing software solutions. While current runtime monitoring approaches fail to capture differences between previously and newly deployed versions at runtime, they present an approach which automatically discovers an execution behaviour model by mining execution logs. Approaches like this, which gather information automatically instead of necessitating manual definition, are crucial given the growing complexity and dynamics of distributed systems. | {
"cite_N": [
"@cite_8"
],
"mid": [
"2899695116"
],
"abstract": [
"Continuous deployment techniques support rapid deployment of new software versions. Usually a new version is deployed on a limited scale, its behavior is monitored and compared against the previously deployed version and either the deployment of the new version is broadened, or one reverts to the previous version. The existing monitoring approaches, however, do not capture the differences in the execution behavior between the new and the previously deployed versions. We propose an approach to automatically discover execution behavior models for the deployed and the new version using the execution logs. Differences between the two models are identified and enriched such that spurious differences, e.g., due to logging statement modifications, are mitigated. The remaining differences are visualized as cohesive diff regions within the discovered behavior model, allowing one to effectively analyze them for, e.g., anomaly detection and release decision making. To evaluate the proposed approach, we conducted case study on Nutch, an open source application, and an industrial application. We discovered the execution behavior models for the two versions of applications and identified the diff regions between them. By analyzing the regions, we detected bugs introduced in the new versions of these applications. The bugs have been reported and later fixed by the developers, thus, confirming the effectiveness of our approach."
]
} |
1907.12240 | 2966580058 | Business success of companies heavily depends on the availability and performance of their client applications. Due to modern development paradigms such as DevOps and microservice architectural styles, applications are decoupled into services with complex interactions and dependencies. Although these paradigms enable individual development cycles with reduced delivery times, they cause several challenges to manage the services in distributed systems. One major challenge is to observe and monitor such distributed systems. This paper provides a qualitative study to understand the challenges and good practices in the field of observability and monitoring of distributed systems. In 28 semi-structured interviews with software professionals we discovered increasing complexity and dynamics in that field. Especially observability becomes an essential prerequisite to ensure stable services and further development of client applications. However, the participants mentioned a discrepancy in the awareness regarding the importance of the topic, both from the management as well as from the developer perspective. Besides technical challenges, we identified a strong need for an organizational concept including strategy, roles and responsibilities. Our results support practitioners in developing and implementing systematic observability and monitoring for distributed systems. | @cite_14 conducted a survey of 62 multinational companies on public cloud adoption. While use of public cloud infrastructure is on the rise, barriers like security, regulatory compliance, and monitoring remain. Regarding monitoring, the survey has shown that half of the companies rely solely on their cloud providers' monitoring dashboard. Participants noted a crucial need for quality of service monitoring integrated with their monitoring tool. | {
"cite_N": [
"@cite_14"
],
"mid": [
"1978800709"
],
"abstract": [
"The group comparison reveals three potential adoption inhibitors, security, data privacy, and portability.Security concern results in an up to 26-fold increase in the non-adoption likelihood.Concerns about availability, integration, and migration complexity are not likely to hold back cloud adoption initiatives. In the context of cloud computing, risks associated with underlying technologies, risks involving service models and outsourcing, and enterprise readiness have been recognized as potential barriers for the adoption. To accelerate cloud adoption, the concrete barriers negatively influencing the adoption decision need to be identified. Our study aims at understanding the impact of technical and security-related barriers on the organizational decision to adopt the cloud. We analyzed data collected through a web survey of 352 individuals working for enterprises consisting of decision makers as well as employees from other levels within an organization. The comparison of adopter and non-adopter sample reveals three potential adoption inhibitor, security, data privacy, and portability. The result from our logistic regression analysis confirms the criticality of the security concern, which results in an up to 26-fold increase in the non-adoption likelihood. Our study underlines the importance of the technical and security perspectives for research investigating the adoption of technology."
]
} |
1907.12240 | 2966580058 | Business success of companies heavily depends on the availability and performance of their client applications. Due to modern development paradigms such as DevOps and microservice architectural styles, applications are decoupled into services with complex interactions and dependencies. Although these paradigms enable individual development cycles with reduced delivery times, they cause several challenges to manage the services in distributed systems. One major challenge is to observe and monitor such distributed systems. This paper provides a qualitative study to understand the challenges and good practices in the field of observability and monitoring of distributed systems. In 28 semi-structured interviews with software professionals we discovered increasing complexity and dynamics in that field. Especially observability becomes an essential prerequisite to ensure stable services and further development of client applications. However, the participants mentioned a discrepancy in the awareness regarding the importance of the topic, both from the management as well as from the developer perspective. Besides technical challenges, we identified a strong need for an organizational concept including strategy, roles and responsibilities. Our results support practitioners in developing and implementing systematic observability and monitoring for distributed systems. | Similarly, Knoche and Hasselbring @cite_1 conducted a survey of German experts on microservice adoption. Drivers for microservice adoption are scalability, maintainability and development speed. On the other hand, barriers to adoption are mainly operational in nature: operations departments resist microservices due to the change in their tasks. On the technical level, running distributed applications that are prone to partial failures, and monitoring them, is a significant challenge. | {
"cite_N": [
"@cite_1"
],
"mid": [
"2907869797"
],
"abstract": [
"Microservices are an architectural style for software which currently receives a lot of attention in both industry and academia. Several companies employ microservice architectures with great success, and there is a wealth of blog posts praising their advantages. Especially so-called Internet-scale systems use them to satisfy their enormous scalability requirements and to rapidly deliver new features to their users. But microservices are not only popular with large, Internet-scale systems. Many traditional companies are also considering whether microservices are a viable option for their applications. However, these companies may have other motivations to employ microservices, and see other barriers which could prevent them from adopting microservices. Furthermore, these drivers and barriers possibly differ among industry sectors. In this article, we present the results of a survey on drivers and barriers for microservice adoption among professionals in Germany. In addition to overall drivers and barriers, we particularly focus on the use of microservices to modernize existing software, with special emphasis on implications for runtime performance and transactionality. We observe interesting differences between early adopters who emphasize scalability of their Internet-scale systems, compared to traditional companies which emphasize maintainability of their IT systems."
]
} |
1907.12240 | 2966580058 | Business success of companies heavily depends on the availability and performance of their client applications. Due to modern development paradigms such as DevOps and microservice architectural styles, applications are decoupled into services with complex interactions and dependencies. Although these paradigms enable individual development cycles with reduced delivery times, they cause several challenges to manage the services in distributed systems. One major challenge is to observe and monitor such distributed systems. This paper provides a qualitative study to understand the challenges and good practices in the field of observability and monitoring of distributed systems. In 28 semi-structured interviews with software professionals we discovered increasing complexity and dynamics in that field. Especially observability becomes an essential prerequisite to ensure stable services and further development of client applications. However, the participants mentioned a discrepancy in the awareness regarding the importance of the topic, both from the management as well as from the developer perspective. Besides technical challenges, we identified a strong need for an organizational concept including strategy, roles and responsibilities. Our results support practitioners in developing and implementing systematic observability and monitoring for distributed systems. | Gamez- @cite_9 performed an analysis of RESTful APIs of cloud providers, identifying requirements for API governance and noting a lack of standardization. | {
"cite_N": [
"@cite_9"
],
"mid": [
"2114850763"
],
"abstract": [
"Cloud infrastructure providers may form Cloud federations to cope with peaks in resource demand and to make large-scale service management simpler for service providers. To realize Cloud federations, a number of technical and managerial difficulties need to be solved. We present ongoing work addressing three related key management topics, namely, specification, scheduling, and monitoring of services. Service providers need to be able to influence how their resources are placed in Cloud federations, as federations may cross national borders or include companies in direct competition with the service provider. Based on related work in the RESERVOIR project, we propose a way to define service structure and placement restrictions using hierarchical directed acyclic graphs. We define a model for scheduling in Cloud federations that abides by the specified placement constraints and minimizes the risk of violating Service-Level Agreements. We present a heuristic that helps the model determine which virtual machines (VMs) are suitable candidates for migration. To aid the scheduler, and to provide unified data to service providers, we also propose a monitoring data distribution architecture that introduces cross-site compatibility by means of semantic metadata annotations."
]
} |
1907.12240 | 2966580058 | Business success of companies heavily depends on the availability and performance of their client applications. Due to modern development paradigms such as DevOps and microservice architectural styles, applications are decoupled into services with complex interactions and dependencies. Although these paradigms enable individual development cycles with reduced delivery times, they cause several challenges to manage the services in distributed systems. One major challenge is to observe and monitor such distributed systems. This paper provides a qualitative study to understand the challenges and good practices in the field of observability and monitoring of distributed systems. In 28 semi-structured interviews with software professionals we discovered increasing complexity and dynamics in that field. Especially observability becomes an essential prerequisite to ensure stable services and further development of client applications. However, the participants mentioned a discrepancy in the awareness regarding the importance of the topic, both from the management as well as from the developer perspective. Besides technical challenges, we identified a strong need for an organizational concept including strategy, roles and responsibilities. Our results support practitioners in developing and implementing systematic observability and monitoring for distributed systems. | While not an empirical study, @cite_16 show monitoring challenges of holistic cloud applications. The scale and complexity of applications are identified as a main challenge. Related to observability, incomplete and inaccurate views of the total system as well as fault localization are other identified challenges. | {
"cite_N": [
"@cite_16"
],
"mid": [
"2107557955"
],
"abstract": [
"Cloud monitoring activity involves dynamically tracking the Quality of Service (QoS) parameters related to virtualized resources (e.g., VM, storage, network, appliances, etc.), the physical resources they share, the applications running on them and data hosted on them. Applications and resources configuration in cloud computing environment is quite challenging considering a large number of heterogeneous cloud resources. Further, considering the fact that at given point of time, there may be need to change cloud resource configuration (number of VMs, types of VMs, number of appliance instances, etc.) for meet application QoS requirements under uncertainties (resource failure, resource overload, workload spike, etc.). Hence, cloud monitoring tools can assist a cloud providers or application developers in: (i) keeping their resources and applications operating at peak efficiency, (ii) detecting variations in resource and application performance, (iii) accounting the service level agreement violations of certain QoS parameters, and (iv) tracking the leave and join operations of cloud resources due to failures and other dynamic configuration changes. In this paper, we identify and discuss the major research dimensions and design issues related to engineering cloud monitoring tools. We further discuss how the aforementioned research dimensions and design issues are handled by current academic research as well as by commercial monitoring tools."
]
} |
1907.12240 | 2966580058 | Business success of companies heavily depends on the availability and performance of their client applications. Due to modern development paradigms such as DevOps and microservice architectural styles, applications are decoupled into services with complex interactions and dependencies. Although these paradigms enable individual development cycles with reduced delivery times, they cause several challenges to manage the services in distributed systems. One major challenge is to observe and monitor such distributed systems. This paper provides a qualitative study to understand the challenges and good practices in the field of observability and monitoring of distributed systems. In 28 semi-structured interviews with software professionals we discovered increasing complexity and dynamics in that field. Especially observability becomes an essential prerequisite to ensure stable services and further development of client applications. However, the participants mentioned a discrepancy in the awareness regarding the importance of the topic, both from the management as well as from the developer perspective. Besides technical challenges, we identified a strong need for an organizational concept including strategy, roles and responsibilities. Our results support practitioners in developing and implementing systematic observability and monitoring for distributed systems. | @cite_10 give an overview of the state-of-the-art in application performance monitoring (APM), describing typical capabilities and available APM software. They found APM to be a solution to monitoring and analyzing cloud environments, but note future challenges in root cause detection, setup effort and interoperability. APM cannot be understood as a purely technical topic anymore but needs to incorporate business and organizational aspects as well. | {
"cite_N": [
"@cite_10"
],
"mid": [
"2606883211"
],
"abstract": [
"The performance of application systems has a direct impact on business metrics. For example, companies lose customers and revenue in case of poor performance such as high response times. Application performance management (APM) aims to provide the required processes and tools to have a continuous and up-to-date picture of relevant performance measures during operations, as well as to support the detection and resolution of performance-related incidents. In this tutorial paper, we provide an overview of the state of the art in APM in industrial practice and academic research, highlight current challenges, and outline future research directions."
]
} |
1907.12240 | 2966580058 | Business success of companies heavily depends on the availability and performance of their client applications. Due to modern development paradigms such as DevOps and microservice architectural styles, applications are decoupled into services with complex interactions and dependencies. Although these paradigms enable individual development cycles with reduced delivery times, they cause several challenges to manage the services in distributed systems. One major challenge is to observe and monitor such distributed systems. This paper provides a qualitative study to understand the challenges and good practices in the field of observability and monitoring of distributed systems. In 28 semi-structured interviews with software professionals we discovered increasing complexity and dynamics in that field. Especially observability becomes an essential prerequisite to ensure stable services and further development of client applications. However, the participants mentioned a discrepancy in the awareness regarding the importance of the topic, both from the management as well as from the developer perspective. Besides technical challenges, we identified a strong need for an organizational concept including strategy, roles and responsibilities. Our results support practitioners in developing and implementing systematic observability and monitoring for distributed systems. | @cite_3 give an insight into commercial cloud monitoring tools, showing state-of-the-art features, identifying shortcomings and, based on these, future areas of research. Information aggregation across different layers of abstraction, a broad range of measurable metrics and extensibility are seen as critical success factors. Tools were found to be lacking in standardization regarding monitoring processes and metrics. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2107557955"
],
"abstract": [
"Cloud monitoring activity involves dynamically tracking the Quality of Service (QoS) parameters related to virtualized resources (e.g., VM, storage, network, appliances, etc.), the physical resources they share, the applications running on them and data hosted on them. Applications and resources configuration in cloud computing environment is quite challenging considering a large number of heterogeneous cloud resources. Further, considering the fact that at given point of time, there may be need to change cloud resource configuration (number of VMs, types of VMs, number of appliance instances, etc.) for meet application QoS requirements under uncertainties (resource failure, resource overload, workload spike, etc.). Hence, cloud monitoring tools can assist a cloud providers or application developers in: (i) keeping their resources and applications operating at peak efficiency, (ii) detecting variations in resource and application performance, (iii) accounting the service level agreement violations of certain QoS parameters, and (iv) tracking the leave and join operations of cloud resources due to failures and other dynamic configuration changes. In this paper, we identify and discuss the major research dimensions and design issues related to engineering cloud monitoring tools. We further discuss how the aforementioned research dimensions and design issues are handled by current academic research as well as by commercial monitoring tools."
]
} |
1907.12253 | 2965663253 | One major challenge in 3D reconstruction is to infer the complete shape geometry from partial foreground occlusions. In this paper, we propose a method to reconstruct the complete 3D shape of an object from a single RGB image, with robustness to occlusion. Given the image and a silhouette of the visible region, our approach completes the silhouette of the occluded region and then generates a point cloud. We show improvements for reconstruction of non-occluded and partially occluded objects by providing the predicted complete silhouette as guidance. We also improve state-of-the-art for 3D shape prediction with a 2D reprojection loss from multiple synthetic views and a surface-based smoothing and refinement step. Experiments demonstrate the efficacy of our approach both quantitatively and qualitatively on synthetic and real scene datasets. | Most of these approaches are applied to objects with clean backgrounds and no occlusions, which may prevent their application to natural images. Sun et al. @cite_41 conduct experiments on real images from Pix3D, a large-scale dataset with aligned ground-truth 3D shapes, but do not consider the problem of occlusion. We are concerned with predicting the shape of objects in natural scenes, which may be partly occluded. Our approach improves the state-of-the-art for object point set generation, and is extended to reconstruct beyond occlusion with the guidance of completed silhouettes. Our silhouette guidance is closely related to the human depth estimation by Rematas et al. @cite_29 . However, Rematas et al. use the visible silhouette (semantic segmentation) rather than a complete silhouette, making it hard to predict overlapped (occluded) regions. In contrast, our approach conditions on the predicted complete silhouette to resolve occlusion ambiguity, and is able to predict the complete 3D shape rather than 2.5D depth points. | {
"cite_N": [
"@cite_41",
"@cite_29"
],
"mid": [
"2055686029",
"2604236302"
],
"abstract": [
"We propose a probabilistic formulation of joint silhouette extraction and 3D reconstruction given a series of calibrated 2D images. Instead of segmenting each image separately in order to construct a 3D surface consistent with the estimated silhouettes, we compute the most probable 3D shape that gives rise to the observed color information. The probabilistic framework, based on Bayesian inference, enables robust 3D reconstruction by optimally taking into account the contribution of all views. We solve the arising maximum a posteriori shape inference in a globally optimal manner by convex relaxation techniques in a spatially continuous representation. For an interactively provided user input in the form of scribbles specifying foreground and background regions, we build corresponding color distributions as multivariate Gaussians and find a volume occupancy that best fits to this data in a variational sense. Compared to classical methods for silhouette-based multiview reconstruction, the proposed approach does not depend on initialization and enjoys significant resilience to violations of the model assumptions due to background clutter, specular reflections, and camera sensor perturbations. In experiments on several real-world data sets, we show that exploiting a silhouette coherency criterion in a multiview setting allows for dramatic improvements of silhouette quality over independent 2D segmentations without any significant increase of computational efforts. This results in more accurate visual hull estimation, needed by a multitude of image-based modeling approaches. We made use of recent advances in parallel computing with a GPU implementation of the proposed method generating reconstructions on volume grids of more than 20 million voxels in up to 4.41 seconds.",
"We introduce a novel method for 3D object detection and pose estimation from color images only. We first use segmentation to detect the objects of interest in 2D even in presence of partial occlusions and cluttered background. By contrast with recent patch-based methods, we rely on a “holistic” approach: We apply to the detected objects a Convolutional Neural Network (CNN) trained to predict their 3D poses in the form of 2D projections of the corners of their 3D bounding boxes. This, however, is not sufficient for handling objects from the recent T-LESS dataset: These objects exhibit an axis of rotational symmetry, and the similarity of two images of such an object under two different poses makes training the CNN challenging. We solve this problem by restricting the range of poses used for training, and by introducing a classifier to identify the range of a pose at run-time before estimating it. We also use an optional additional step that refines the predicted poses. We improve the state-of-the-art on the LINEMOD dataset from 73.7 [2] to 89.3 of correctly registered RGB frames. We are also the first to report results on the Occlusion dataset [1 ] using color images only. We obtain 54 of frames passing the Pose 6D criterion on average on several sequences of the T-LESS dataset, compared to the 67 of the state-of-the-art [10] on the same sequences which uses both color and depth. The full approach is also scalable, as a single network can be trained for multiple objects simultaneously."
]
} |
1907.12253 | 2965663253 | One major challenge in 3D reconstruction is to infer the complete shape geometry from partial foreground occlusions. In this paper, we propose a method to reconstruct the complete 3D shape of an object from a single RGB image, with robustness to occlusion. Given the image and a silhouette of the visible region, our approach completes the silhouette of the occluded region and then generates a point cloud. We show improvements for reconstruction of non-occluded and partially occluded objects by providing the predicted complete silhouette as guidance. We also improve state-of-the-art for 3D shape prediction with a 2D reprojection loss from multiple synthetic views and a surface-based smoothing and refinement step. Experiments demonstrate the efficacy of our approach both quantitatively and qualitatively on synthetic and real scene datasets. | Occlusions have long been an obstacle in multi-view reconstruction. Solutions have been proposed to recover portions of surfaces from single views, with synthetic apertures @cite_6 @cite_31 , or to otherwise improve the robustness of matching and completion functions across multiple views @cite_38 @cite_23 @cite_13 . Other works decompose a scene into layered depth maps from RGBD images @cite_27 or video @cite_9 and then seek to complete the occluded portions of the maps. However, errors in the layered segmentation can severely degrade the recovery of the occluded region. Learning-based approaches @cite_32 @cite_25 @cite_15 have posed recovery from occlusion as a 2D semantic segmentation completion task. Ehsani et al. @cite_10 propose to complete the silhouette and texture of an occluded object. Our silhouette completion network is most similar to that of Ehsani et al., but we ease the task by predicting the complete silhouette rather than the full texture. We demonstrate better performance with our up-sampling-based convolution decoder instead of the fully connected layers used by Ehsani et al. Moreover, we go further and try to predict the complete 3D shape of the occluded object. | {
"cite_N": [
"@cite_38",
"@cite_15",
"@cite_10",
"@cite_9",
"@cite_32",
"@cite_6",
"@cite_27",
"@cite_23",
"@cite_31",
"@cite_13",
"@cite_25"
],
"mid": [
"2055686029",
"244217497",
"2101092098",
"2462462929",
"2464328412",
"2400779783",
"832925222",
"1925320057",
"2949634581",
"2884531318",
"2894795260"
],
"abstract": [
"We propose a probabilistic formulation of joint silhouette extraction and 3D reconstruction given a series of calibrated 2D images. Instead of segmenting each image separately in order to construct a 3D surface consistent with the estimated silhouettes, we compute the most probable 3D shape that gives rise to the observed color information. The probabilistic framework, based on Bayesian inference, enables robust 3D reconstruction by optimally taking into account the contribution of all views. We solve the arising maximum a posteriori shape inference in a globally optimal manner by convex relaxation techniques in a spatially continuous representation. For an interactively provided user input in the form of scribbles specifying foreground and background regions, we build corresponding color distributions as multivariate Gaussians and find a volume occupancy that best fits to this data in a variational sense. Compared to classical methods for silhouette-based multiview reconstruction, the proposed approach does not depend on initialization and enjoys significant resilience to violations of the model assumptions due to background clutter, specular reflections, and camera sensor perturbations. In experiments on several real-world data sets, we show that exploiting a silhouette coherency criterion in a multiview setting allows for dramatic improvements of silhouette quality over independent 2D segmentations without any significant increase of computational efforts. This results in more accurate visual hull estimation, needed by a multitude of image-based modeling approaches. We made use of recent advances in parallel computing with a GPU implementation of the proposed method generating reconstructions on volume grids of more than 20 million voxels in up to 4.41 seconds.",
"The availability of commodity depth sensors such as Kinect has enabled development of methods which can densely reconstruct arbitrary scenes. While the results of these methods are accurate and visually appealing, they are quite often incomplete. This is either due to the fact that only part of the space was visible during the data capture process or due to the surfaces being occluded by other objects in the scene. In this paper, we address the problem of completing and refining such reconstructions. We propose a method for scene completion that can infer the layout of the complete room and the full extent of partially occluded objects. We propose a new probabilistic model, Contour Completion Random Fields, that allows us to complete the boundaries of occluded surfaces. We evaluate our method on synthetic and real world reconstructions of 3D scenes and show that it quantitatively and qualitatively outperforms standard methods. We created a large dataset of partial and complete reconstructions which we will make available to the community as a benchmark for the scene completion task. Finally, we demonstrate the practical utility of our algorithm via an augmented-reality application where objects interact with the completed reconstructions inferred by our method.",
"We present an approach for 3D reconstruction from multiple video streams taken by static, synchronized and calibrated cameras that is capable of enforcing temporal consistency on the reconstruction of successive frames. Our goal is to improve the quality of the reconstruction by finding corresponding pixels in subsequent frames of the same camera using optical flow, and also to at least maintain the quality of the single time-frame reconstruction when these correspondences are wrong or cannot be found. This allows us to process scenes with fast motion, occlusions and self- occlusions where optical flow fails for large numbers of pixels. To this end, we modify the belief propagation algorithm to operate on a 3D graph that includes both spatial and temporal neighbors and to be able to discard messages from outlying neighbors. We also propose methods for introducing a bias and for suppressing noise typically observed in uniform regions. The bias encapsulates information about the background and aids in achieving a temporally consistent reconstruction and in the mitigation of occlusion related errors. We present results on publicly available real video sequences. We also present quantitative comparisons with results obtained by other researchers.",
"In this paper, we propose a non-local structured prior for volumetric multi-view 3D reconstruction. Towards this goal, we present a novel Markov random field model based on ray potentials in which assumptions about large 3D surface patches such as planarity or Manhattan world constraints can be efficiently encoded as probabilistic priors. We further derive an inference algorithm that reasons jointly about voxels, pixels and image segments, and estimates marginal distributions of appearance, occupancy, depth, normals and planarity. Key to tractable inference is a novel hybrid representation that spans both voxel and pixel space and that integrates non-local information from 2D image segmentations in a principled way. We compare our non-local prior to commonly employed local smoothness assumptions and a variety of state-of-the-art volumetric reconstruction baselines on challenging outdoor scenes with textureless and reflective surfaces. Our experiments indicate that regularizing over larger distances has the potential to resolve ambiguities where local regularizers fail.",
"This paper addresses the challenging problem of perceiving the hidden or occluded geometry of the scene depicted in any given RGBD image. Unlike other image labeling problems such as image segmentation where each pixel needs to be assigned a single label, layered decomposition requires us to assign multiple labels to pixels. We propose a novel \"Occlusion-CRF\" model that allows for the integration of sophisticated priors to regularize the solution space and enables the automatic inference of the layer decomposition. We use a generalization of the Fusion Move algorithm to perform Maximum a Posterior (MAP) inference on the model that can handle the large label sets needed to represent multiple surface assignments to each pixel. We have evaluated the proposed model and the inference algorithm on many RGBD images of cluttered indoor scenes. Our experiments show that not only is our model able to explain occlusions but it also enables automatic inpainting of occluded invisible surfaces.",
"Enhanced particle filter tracker by latent occlusion flag to handle full occlusion.Handled persistent and or complex occlusions in RGBD sequences.Developed data-driven occlusion mask to evaluate various parts of observation.Fused multiple feature from color and depth domains to gain occlusion robustness. Although appearance-based trackers have been greatly improved in the last decade, they still struggle with challenges that are not fully resolved. Of these challenges, occlusions, which can be long lasting and of a wide variety, are often ignored or only partly addressed due to the difficulty in their treatments. To address this problem, in this study, we propose an occlusion-aware particle filter framework that employs a probabilistic model with a latent variable representing an occlusion flag. The proposed framework prevents losing the target by prediction of emerging occlusions, updates the target template by shifting relevant information, expands the search area for an occluded target, and grants quick recovery of the target after occlusion. Furthermore, the algorithm employs multiple features from the color and depth domains to achieve robustness against illumination changes and clutter, so that the probabilistic framework accommodates the fusion of those features. This method was applied to the Princeton RGBD Tracking Dataset, and the performance of our method with different sets of features was compared with those of the state-of-the-art trackers. The results revealed that our method outperformed the existing RGB and RGBD trackers by successfully dealing with different types of occlusions.",
"Accurate recovery of 3D geometrical surfaces from calibrated 2D multi-view images is a fundamental yet active research area in computer vision. Despite the steady progress in multi-view stereo (MVS) reconstruction, many existing methods are still limited in recovering fine-scale details and sharp features while suppressing noises, and may fail in reconstructing regions with less textures. To address these limitations, this paper presents a detail-preserving and content-aware variational (DCV) MVS method, which reconstructs the 3D surface by alternating between reprojection error minimization and mesh denoising. In reprojection error minimization, we propose a novel inter-image similarity measure, which is effective to preserve fine-scale details of the reconstructed surface and builds a connection between guided image filtering and image registration. In mesh denoising, we propose a content-aware @math -minimization algorithm by adaptively estimating the @math value and regularization parameters. Compared with conventional isotropic mesh smoothing approaches, the proposed method is much more promising in suppressing noise while preserving sharp features. Experimental results on benchmark data sets demonstrate that our DCV method is capable of recovering more surface details, and obtains cleaner and more accurate reconstructions than the state-of-the-art methods. In particular, our method achieves the best results among all published methods on the Middlebury dino ring and dino sparse data sets in terms of both completeness and accuracy.",
"Common techniques in structure from motion do not explicitly handle foreground occlusions and disocclusions, leading to several trajectories of a single 3D point. Hence, different discontinued trajectories induce a set of (more inaccurate) 3D points instead of a single 3D point, so that it is highly desirable to enforce long continuous trajectories which automatically bridge occlusions after a re-identification step. The solution proposed in this paper is to connect features in the current image to trajectories which discontinued earlier during the tracking. This is done using a correspondence analysis which is designed for wide baselines and an outlier elimination strategy using the epipolar geometry. The reference to the 3D object points can be used as a new constraint in the bundle adjustment. The feature localization is done using the SIFT detector extended by a Gaussian approximation of the gradient image signal. This technique provides the robustness of SIFT coupled with increased localization accuracy. Our results show that the reconstruction can be drastically improved and the drift is reduced, especially in sequences with occlusions resulting from foreground objects. In scenarios with large occlusions, the new approach leads to reliable and accurate results while a standard reference method fails.",
"A significant weakness of most current deep Convolutional Neural Networks is the need to train them using vast amounts of manu- ally labelled data. In this work we propose a unsupervised framework to learn a deep convolutional neural network for single view depth predic- tion, without requiring a pre-training stage or annotated ground truth depths. We achieve this by training the network in a manner analogous to an autoencoder. At training time we consider a pair of images, source and target, with small, known camera motion between the two such as a stereo pair. We train the convolutional encoder for the task of predicting the depth map for the source image. To do so, we explicitly generate an inverse warp of the target image using the predicted depth and known inter-view displacement, to reconstruct the source image; the photomet- ric error in the reconstruction is the reconstruction loss for the encoder. The acquisition of this training data is considerably simpler than for equivalent systems, requiring no manual annotation, nor calibration of depth sensor to camera. We show that our network trained on less than half of the KITTI dataset (without any further augmentation) gives com- parable performance to that of the state of art supervised methods for single view depth estimation.",
"3D reconstruction from single view images is an ill-posed problem. Inferring the hidden regions from self-occluded images is both challenging and ambiguous. We propose a two-pronged approach to address these issues. To better incorporate the data prior and generate meaningful reconstructions, we propose 3D-LMNet, a latent embedding matching approach for 3D reconstruction. We first train a 3D point cloud auto-encoder and then learn a mapping from the 2D image to the corresponding learnt embedding. To tackle the issue of uncertainty in the reconstruction, we predict multiple reconstructions that are consistent with the input view. This is achieved by learning a probablistic latent space with a novel view-specific diversity loss. Thorough quantitative and qualitative analysis is performed to highlight the significance of the proposed approach. We outperform state-of-the-art approaches on the task of single-view 3D reconstruction on both real and synthetic datasets while generating multiple plausible reconstructions, demonstrating the generalizability and utility of our approach.",
"Some existing CNN-based methods for single-view 3D object reconstruction represent a 3D object as either a 3D voxel occupancy grid or multiple depth-mask image pairs. However, these representations are inefficient since empty voxels or background pixels are wasteful. We propose a novel approach that addresses this limitation by replacing masks with “deformation-fields”. Given a single image at an arbitrary viewpoint, a CNN predicts multiple surfaces, each in a canonical location relative to the object. Each surface comprises a depth-map and corresponding deformation-field that ensures every pixel-depth pair in the depth-map lies on the object surface. These surfaces are then fused to form the full 3D shape. During training we use a combination of per-view loss and multi-view losses. The novel multi-view loss encourages the 3D points back-projected from a particular view to be consistent across views. Extensive experiments demonstrate the efficiency and efficacy of our method on single-view 3D object reconstruction."
]
} |
1907.12398 | 2965570822 | User authentication can rely on various factors (e.g., a password, a cryptographic key, biometric data) but should not reveal any secret or private information. This seemingly paradoxical feat can be achieved through zero-knowledge proofs. Unfortunately, naive password-based approaches still prevail on the web. Multi-factor authentication schemes address some of the weaknesses of the traditional login process, but generally have deployability issues or degrade usability even further as they assume users do not possess adequate hardware. This assumption no longer holds: smartphones with biometric sensors, cameras, short-range communication capabilities, and unlimited data plans have become ubiquitous. In this paper, we show that, assuming the user has such a device, both security and usability can be drastically improved using an augmented password-authenticated key agreement (PAKE) protocol and message authentication codes. | The last couple of decades has seen a plethora of proposals for user authentication. In general, existing schemes suffer from at least one of the following drawbacks: (a) they require a dedicated device, (b) they are proprietary, (c) they involve a shared secret, and or (d) they still require a traditional password. Herein we only discuss a small subset of schemes and refer to the paper by @cite_9 for an extensive evaluation of related work. Using their framework we evaluated ZeroTwo and present the results in Table . | {
"cite_N": [
"@cite_9"
],
"mid": [
"2030112111"
],
"abstract": [
"We evaluate two decades of proposals to replace text passwords for general-purpose user authentication on the web using a broad set of twenty-five usability, deployability and security benefits that an ideal scheme might provide. The scope of proposals we survey is also extensive, including password management software, federated login protocols, graphical password schemes, cognitive authentication schemes, one-time passwords, hardware tokens, phone-aided schemes and biometrics. Our comprehensive approach leads to key insights about the difficulty of replacing passwords. Not only does no known scheme come close to providing all desired benefits: none even retains the full set of benefits that legacy passwords already provide. In particular, there is a wide range from schemes offering minor security benefits beyond legacy passwords, to those offering significant security benefits in return for being more costly to deploy or more difficult to use. We conclude that many academic proposals have failed to gain traction because researchers rarely consider a sufficiently wide range of real-world constraints. Beyond our analysis of current schemes, our framework provides an evaluation methodology and benchmark for future web authentication proposals."
]
} |
1907.12398 | 2965570822 | User authentication can rely on various factors (e.g., a password, a cryptographic key, biometric data) but should not reveal any secret or private information. This seemingly paradoxical feat can be achieved through zero-knowledge proofs. Unfortunately, naive password-based approaches still prevail on the web. Multi-factor authentication schemes address some of the weaknesses of the traditional login process, but generally have deployability issues or degrade usability even further as they assume users do not possess adequate hardware. This assumption no longer holds: smartphones with biometric sensors, cameras, short-range communication capabilities, and unlimited data plans have become ubiquitous. In this paper, we show that, assuming the user has such a device, both security and usability can be drastically improved using an augmented password-authenticated key agreement (PAKE) protocol and message authentication codes. | Bonneau @cite_17 had previously proposed a password-based authentication protocol, designed to avoid revealing the password to the server (using Javascript), which requires neither a software update on the client side nor a separate authentication device. @cite_8 similarly focused on restrictions imposed by legacy systems to address the issues of weak passwords. | {
"cite_N": [
"@cite_8",
"@cite_17"
],
"mid": [
"1542316315",
"2058107309"
],
"abstract": [
"In this paper we address the problem of secure communication and authentication in ad-hoc wireless networks. This is a difficult problem, as it involves bootstrapping trust between strangers. We present a user-friendly solution, which provides secure authentication using almost any established public-key-based key exchange protocol, as well as inexpensive hash-based alternatives. In our approach, devices exchange a limited amount of public information over a privileged side channel, which will then allow them to complete an authenticated key exchange protocol over the wireless link. Our solution does not require a public key infrastructure, is secure against passive attacks on the privileged side channel and all attacks on the wireless link, and directly captures users’ intuitions that they want to talk to a particular previously unknown device in their physical proximity. We have implemented our system in Java for a variety of different devices, communication media, and key",
"Generally, if a user wants to use numerous different network services, he she must register himself herself to every service providing server. It is extremely hard for users to remember these different identities and passwords. In order to resolve this problem, various multi-server authentication protocols have been proposed. Recently, analyzed Hsiang and Shih's multi-server authentication protocol and proposed an improved dynamic identity based authentication protocol for multi-server architecture. They claimed that their protocol provides user's anonymity, mutual authentication, the session key agreement and can resist several kinds of attacks. However, through careful analysis, we find that 's protocol is still vulnerable to leak-of-verifier attack, stolen smart card attack and impersonation attack. Besides, since there is no way for the control server CS to know the real identity of the user, the authentication and session key agreement phase of 's protocol is incorrect. We propose an efficient and security dynamic identity based authentication protocol for multi-server architecture that removes the aforementioned weaknesses. The proposed protocol is extremely suitable for use in distributed multi-server architecture since it provides user's anonymity, mutual authentication, efficient, and security."
]
} |
1907.12398 | 2965570822 | User authentication can rely on various factors (e.g., a password, a cryptographic key, biometric data) but should not reveal any secret or private information. This seemingly paradoxical feat can be achieved through zero-knowledge proofs. Unfortunately, naive password-based approaches still prevail on the web. Multi-factor authentication schemes address some of the weaknesses of the traditional login process, but generally have deployability issues or degrade usability even further as they assume users do not possess adequate hardware. This assumption no longer holds: smartphones with biometric sensors, cameras, short-range communication capabilities, and unlimited data plans have become ubiquitous. In this paper, we show that, assuming the user has such a device, both security and usability can be drastically improved using an augmented password-authenticated key agreement (PAKE) protocol and message authentication codes. | Sound-Proof @cite_10 is a recent system that relies on sound for the smartphone and browser to communicate. One of the main goals of Sound-Proof is to provide a seamless experience to users, i.e., the phone need not even be handled for the authentication process to complete. However, the user still has to type a password in the browser, which comes with the issues we discussed previously. Moreover, the complete seamlessness of Sound-Proof is not compatible with our view that certain actions should be explicitly authorized on a trusted device. | {
"cite_N": [
"@cite_10"
],
"mid": [
"1955134710"
],
"abstract": [
"Two-factor authentication protects online accounts even if passwords are leaked. Most users, however, prefer password-only authentication. One reason why two-factor authentication is so unpopular is the extra steps that the user must complete in order to log in. Currently deployed two-factor authentication mechanisms require the user to interact with his phone to, for example, copy a verification code to the browser. Two-factor authentication schemes that eliminate user-phone interaction exist, but require additional software to be deployed. In this paper we propose Sound-Proof, a usable and deployable two-factor authentication mechanism. Sound-Proof does not require interaction between the user and his phone. In Sound-Proof the second authentication factor is the proximity of the user's phone to the device being used to log in. The proximity of the two devices is verified by comparing the ambient noise recorded by their microphones. Audio recording and comparison are transparent to the user, so that the user experience is similar to the one of password-only authentication. Sound-Proof can be easily deployed as it works with current phones and major browsers without plugins. We build a prototype for both Android and iOS. We provide empirical evidence that ambient noise is a robust discriminant to determine the proximity of two devices both indoors and outdoors, and even if the phone is in a pocket or purse. We conduct a user study designed to compare the perceived usability of Sound-Proof with Google 2-Step Verification. Participants ranked Sound-Proof as more usable and the majority would be willing to use Sound-Proof even for scenarios in which two-factor authentication is optional."
]
} |
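As a toy illustration of the mechanism sketched above (the phone and the browser each record ambient noise, and the second factor is accepted only if the two recordings are similar enough), the following Python sketch scores two equal-length audio signals with a normalized cross-correlation and thresholds the result. It is not Sound-Proof's actual similarity measure; the function names, the 0.6 threshold, and the assumption of pre-trimmed, equal-length signals are illustrative assumptions only.

```python
# Illustrative sketch only: a toy ambient-noise proximity check in the spirit
# of the idea described above. The real system uses a different, calibrated
# similarity measure; signals, sample rate, and threshold here are assumptions.
import numpy as np

def normalized_xcorr(a: np.ndarray, b: np.ndarray) -> float:
    """Maximum normalized cross-correlation between two equal-length signals."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    corr = np.correlate(a, b, mode="full") / len(a)
    return float(np.max(corr))

def devices_colocated(phone_audio: np.ndarray, browser_audio: np.ndarray,
                      threshold: float = 0.6) -> bool:
    """Accept the second factor only if both recordings are similar enough."""
    return normalized_xcorr(phone_audio, browser_audio) >= threshold
```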
1907.12353 | 2966108228 | We present recursive cascaded networks, a general architecture that enables learning deep cascades, for deformable image registration. The proposed architecture is simple in design and can be built on any base network. The moving image is warped successively by each cascade and finally aligned to the fixed image; this procedure is recursive in a way that every cascade learns to perform a progressive deformation for the current warped image. The entire system is end-to-end and jointly trained in an unsupervised manner. In addition, enabled by the recursive architecture, one cascade can be iteratively applied for multiple times during testing, which approaches a better fit between each of the image pairs. We evaluate our method on 3D medical images, where deformable registration is most commonly applied. We demonstrate that recursive cascaded networks achieve consistent, significant gains and outperform state-of-the-art methods. The performance reveals an increasing trend as long as more cascades are trained, while the limit is not observed. Our code will be made publicly available. | Cascade approaches have been employed in a variety of computer vision domains; e.g., cascaded pose regression progressively refines a pose estimate learned from supervised training data @cite_32 , and cascaded classifiers speed up object detection @cite_37 . | {
"cite_N": [
"@cite_37",
"@cite_32"
],
"mid": [
"2147474239",
"2164598857"
],
"abstract": [
"We describe a cascaded method for object detection. This approach uses a novel organization of the first cascade stage called \"feature-centric\" evaluation which re-uses feature evaluations across multiple candidate windows. We minimize the cost of this evaluation through several simplifications: (1) localized lighting normalization, (2) representation of the classifier as an additive model and (3) discrete-valued features. Such a method also incorporates a unique feature representation. The early stages in the cascade use simple fast feature evaluations and the later stages use more complex discriminative features. In particular, we propose features based on sparse coding and ordinal relationships among filter responses. This combination of cascaded feature-centric evaluation with features of increasing complexity achieves both computational efficiency and accuracy. We describe object detection experiments on ten objects including faces and automobiles. These results include 97 recognition at equal error rate on the UIUC image database for car detection.",
"This paper describes a machine learning approach for visual object detection which is capable of processing images extremely rapidly and achieving high detection rates. This work is distinguished by three key contributions. The first is the introduction of a new image representation called the \"integral image\" which allows the features used by our detector to be computed very quickly. The second is a learning algorithm, based on AdaBoost, which selects a small number of critical visual features from a larger set and yields extremely efficient classifiers. The third contribution is a method for combining increasingly more complex classifiers in a \"cascade\" which allows background regions of the image to be quickly discarded while spending more computation on promising object-like regions. The cascade can be viewed as an object specific focus-of-attention mechanism which unlike previous approaches provides statistical guarantees that discarded regions are unlikely to contain the object of interest. In the domain of face detection the system yields detection rates comparable to the best previous systems. Used in real-time applications, the detector runs at 15 frames per second without resorting to image differencing or skin color detection."
]
} |
1907.12353 | 2966108228 | We present recursive cascaded networks, a general architecture that enables learning deep cascades, for deformable image registration. The proposed architecture is simple in design and can be built on any base network. The moving image is warped successively by each cascade and finally aligned to the fixed image; this procedure is recursive in a way that every cascade learns to perform a progressive deformation for the current warped image. The entire system is end-to-end and jointly trained in an unsupervised manner. In addition, enabled by the recursive architecture, one cascade can be iteratively applied for multiple times during testing, which approaches a better fit between each of the image pairs. We evaluate our method on 3D medical images, where deformable registration is most commonly applied. We demonstrate that recursive cascaded networks achieve consistent, significant gains and outperform state-of-the-art methods. The performance reveals an increasing trend as long as more cascades are trained, while the limit is not observed. Our code will be made publicly available. | Deep learning also benefits from cascade architectures. For example, the deep deformation network @cite_4 cascades two stages and predicts a deformation for landmark localization. Other applications include object detection @cite_9 , semantic segmentation @cite_47 , and image super-resolution @cite_51 . There are also several works specific to medical images, e.g., 3D image reconstruction for MRIs @cite_42 @cite_10 , liver segmentation @cite_44 , and mitosis detection @cite_45 . Note that shallow, non-recursive network cascades are usually proposed in those works. | {
"cite_N": [
"@cite_4",
"@cite_10",
"@cite_9",
"@cite_42",
"@cite_44",
"@cite_45",
"@cite_47",
"@cite_51"
],
"mid": [
"2952074561",
"2788295695",
"2770205545",
"2518965973",
"2606181163",
"2559348937",
"2951001760",
"2495387757"
],
"abstract": [
"We propose a novel cascaded framework, namely deep deformation network (DDN), for localizing landmarks in non-rigid objects. The hallmarks of DDN are its incorporation of geometric constraints within a convolutional neural network (CNN) framework, ease and efficiency of training, as well as generality of application. A novel shape basis network (SBN) forms the first stage of the cascade, whereby landmarks are initialized by combining the benefits of CNN features and a learned shape basis to reduce the complexity of the highly nonlinear pose manifold. In the second stage, a point transformer network (PTN) estimates local deformation parameterized as thin-plate spline transformation for a finer refinement. Our framework does not incorporate either handcrafted features or part connectivity, which enables an end-to-end shape prediction pipeline during both training and testing. In contrast to prior cascaded networks for landmark localization that learn a mapping from feature space to landmark locations, we demonstrate that the regularization induced through geometric priors in the DDN makes it easier to train, yet produces superior results. The efficacy and generality of the architecture is demonstrated through state-of-the-art performances on several benchmarks for multiple tasks such as facial landmark localization, human body pose estimation and bird part localization.",
"In this paper, we propose a novel approach for efficient training of deep neural networks in a bottom-up fashion using a layered structure. Our algorithm, which we refer to as deep cascade learning, is motivated by the cascade correlation approach of Fahlman and Lebiere, who introduced it in the context of perceptrons. We demonstrate our algorithm on networks of convolutional layers, though its applicability is more general. Such training of deep networks in a cascade directly circumvents the well-known vanishing gradient problem by ensuring that the output is always adjacent to the layer being trained. We present empirical evaluations comparing our deep cascade training with standard end–end training using back propagation of two convolutional neural network architectures on benchmark image classification tasks (CIFAR-10 and CIFAR-100). We then investigate the features learned by the approach and find that better, domain-specific, representations are learned in early layers when compared to what is learned in end–end training. This is partially attributable to the vanishing gradient problem that inhibits early layer filters to change significantly from their initial settings. While both networks perform similarly overall, recognition accuracy increases progressively with each added layer, with discriminative features learned in every stage of the network, whereas in end–end training, no such systematic feature representation was observed. We also show that such cascade training has significant computational and memory advantages over end–end training, and can be used as a pretraining algorithm to obtain a better performance.",
"Recovering images from undersampled linear measurements typically leads to an ill-posed linear inverse problem, that asks for proper statistical priors. Building effective priors is however challenged by the low train and test overhead dictated by real-time tasks; and the need for retrieving visually \"plausible\" and physically \"feasible\" images with minimal hallucination. To cope with these challenges, we design a cascaded network architecture that unrolls the proximal gradient iterations by permeating benefits from generative residual networks (ResNet) to modeling the proximal operator. A mixture of pixel-wise and perceptual costs is then deployed to train proximals. The overall architecture resembles back-and-forth projection onto the intersection of feasible and plausible images. Extensive computational experiments are examined for a global task of reconstructing MR images of pediatric patients, and a more local task of superresolving CelebA faces, that are insightful to design efficient architectures. Our observations indicate that for MRI reconstruction, a recurrent ResNet with a single residual block effectively learns the proximal. This simple architecture appears to significantly outperform the alternative deep ResNet architecture by 2dB SNR, and the conventional compressed-sensing MRI by 4dB SNR with 100x faster inference. For image superresolution, our preliminary results indicate that modeling the denoising proximal demands deep ResNets.",
"This paper is on human pose estimation using Convolutional Neural Networks. Our main contribution is a CNN cascaded architecture specifically designed for learning part relationships and spatial context, and robustly inferring pose even for the case of severe part occlusions. To this end, we propose a detection-followed-by-regression CNN cascade. The first part of our cascade outputs part detection heatmaps and the second part performs regression on these heatmaps. The benefits of the proposed architecture are multi-fold: It guides the network where to focus in the image and effectively encodes part constraints and context. More importantly, it can effectively cope with occlusions because part detection heatmaps for occluded parts provide low confidence scores which subsequently guide the regression part of our network to rely on contextual information in order to predict the location of these parts. Additionally, we show that the proposed cascade is flexible enough to readily allow the integration of various CNN architectures for both detection and regression, including recent ones based on residual learning. Finally, we illustrate that our cascade achieves top performance on the MPII and LSP data sets. Code can be downloaded from http: www.cs.nott.ac.uk psxab5 .",
"Abstract Facial landmark localization is a fundamental module for pose-invariant face recognition. The most common approach for facial landmark detection is cascaded regression, which is composed of two steps: feature extraction and facial shape regression. Recent methods employ deep convolutional networks to extract robust features for each step, while the whole system could be regarded as a deep cascaded regression architecture. In this work, instead of employing a deep regression network, a Globally Optimized Dual-Pathway (GoDP) deep architecture is proposed to identify the target pixels through solving a cascaded pixel labeling problem without resorting to high-level inference models or complex stacked architecture. The proposed end-to-end system relies on distance-aware soft-max functions and dual-pathway proposal-refinement architecture. Results show that it outperforms the state-of-the-art cascaded regression-based methods on multiple in-the-wild face alignment databases. The model achieves 1.84 normalized mean error (NME) on the AFLW database [1], which outperforms 3DDFA [2] by 61.8 . Experiments on face identification demonstrate that GoDP, coupled with DPM-headhunter [3], is able to improve rank-1 identification rate by 44.2 compare to Dlib [4] toolbox on a challenging database.",
"Object detection is a challenging task in visual understanding domain, and even more so if the supervision is to be weak. Recently, few efforts to handle the task without expensive human annotations is established by promising deep neural network. A new architecture of cascaded networks is proposed to learn a convolutional neural network (CNN) under such conditions. We introduce two such architectures, with either two cascade stages or three which are trained in an end-to-end pipeline. The first stage of both architectures extracts best candidate of class specific region proposals by training a fully convolutional network. In the case of the three stage architecture, the middle stage provides object segmentation, using the output of the activation maps of first stage. The final stage of both architectures is a part of a convolutional neural network that performs multiple instance learning on proposals extracted in the previous stage(s). Our experiments on the PASCAL VOC 2007, 2010, 2012 and large scale object datasets, ILSVRC 2013, 2014 datasets show improvements in the areas of weakly-supervised object detection, classification and localization.",
"Object detection is a challenging task in visual understanding domain, and even more so if the supervision is to be weak. Recently, few efforts to handle the task without expensive human annotations is established by promising deep neural network. A new architecture of cascaded networks is proposed to learn a convolutional neural network (CNN) under such conditions. We introduce two such architectures, with either two cascade stages or three which are trained in an end-to-end pipeline. The first stage of both architectures extracts best candidate of class specific region proposals by training a fully convolutional network. In the case of the three stage architecture, the middle stage provides object segmentation, using the output of the activation maps of first stage. The final stage of both architectures is a part of a convolutional neural network that performs multiple instance learning on proposals extracted in the previous stage(s). Our experiments on the PASCAL VOC 2007, 2010, 2012 and large scale object datasets, ILSVRC 2013, 2014 datasets show improvements in the areas of weakly-supervised object detection, classification and localization.",
"Large pose variations remain to be a challenge that confronts real-word face detection. We propose a new cascaded Convolutional Neural Network, dubbed the name Supervised Transformer Network, to address this challenge. The first stage is a multi-task Region Proposal Network (RPN), which simultaneously predicts candidate face regions along with associated facial landmarks. The candidate regions are then warped by mapping the detected facial landmarks to their canonical positions to better normalize the face patterns. The second stage, which is a RCNN, then verifies if the warped candidate regions are valid faces or not. We conduct end-to-end learning of the cascaded network, including optimizing the canonical positions of the facial landmarks. This supervised learning of the transformations automatically selects the best scale to differentiate face non-face patterns. By combining feature maps from both stages of the network, we achieve state-of-the-art detection accuracies on several public benchmarks. For real-time performance, we run the cascaded network only on regions of interests produced from a boosting cascade face detector. Our detector runs at 30 FPS on a single CPU core for a VGA-resolution image."
]
} |
1907.12353 | 2966108228 | We present recursive cascaded networks, a general architecture that enables learning deep cascades, for deformable image registration. The proposed architecture is simple in design and can be built on any base network. The moving image is warped successively by each cascade and finally aligned to the fixed image; this procedure is recursive in a way that every cascade learns to perform a progressive deformation for the current warped image. The entire system is end-to-end and jointly trained in an unsupervised manner. In addition, enabled by the recursive architecture, one cascade can be iteratively applied for multiple times during testing, which approaches a better fit between each of the image pairs. We evaluate our method on 3D medical images, where deformable registration is most commonly applied. We demonstrate that recursive cascaded networks achieve consistent, significant gains and outperform state-of-the-art methods. The performance reveals an increasing trend as long as more cascades are trained, while the limit is not observed. Our code will be made publicly available. | With respect to registration, traditional algorithms commonly optimize an energy function in an iterative fashion @cite_39 @cite_53 @cite_23 @cite_2 @cite_33 @cite_13 @cite_54 @cite_56 . Those methods are generally recursive as well, i.e., at each iteration a similar alignment is computed with respect to the current warped image. Iterative Closest Point is an iterative, recursive approach for registering point clouds @cite_18 @cite_31 : at each iteration, the closest pairs of points are matched and a rigid transformation that minimizes their difference is solved for. In deformable image registration, most traditional algorithms work in essentially the same way, but with far more complex transformations. Standard symmetric normalization (SyN) @cite_23 maximizes the cross-correlation within the space of diffeomorphic maps during iterations. Optimizing free-form deformations parameterized by B-splines @cite_17 is another standard approach. | {
"cite_N": [
"@cite_18",
"@cite_33",
"@cite_53",
"@cite_54",
"@cite_39",
"@cite_56",
"@cite_23",
"@cite_2",
"@cite_31",
"@cite_13",
"@cite_17"
],
"mid": [
"2109323956",
"2064358676",
"2610614679",
"2004312117",
"2898182017",
"2049981393",
"2118961645",
"2132512702",
"2089254731",
"2133774049",
"2074208271"
],
"abstract": [
"We propose a novel algorithm to register multiple 3D point sets within a common reference frame using a manifold optimization approach. The point sets are obtained with multiple laser scanners or a mobile scanner. Unlike most prior algorithms, our approach performs an explicit optimization on the manifold of rotations, allowing us to formulate the registration problem as an unconstrained minimization on a constrained manifold. This approach exploits the Lie group structure of SO 3 and the simple representation of its associated Lie algebra so 3 in terms of R3. Our contributions are threefold. We present a new analytic method based on singular value decompositions that yields a closed-form solution for simultaneous multiview registration in the noise-free scenario. Secondly, we use this method to derive a good initial estimate of a solution in the noise-free case. This initialization step may be of use in any general iterative scheme. Finally, we present an iterative scheme based on Newton's method on SO 3 that has locally quadratic convergence. We demonstrate the efficacy of our scheme on scan data taken both from the Digital Michelangelo project and from scans extracted from models, and compare it to some of the other well known schemes for multiview registration. In all cases, our algorithm converges much faster than the other approaches, (in some cases orders of magnitude faster), and generates consistently higher quality registrations.",
"In this paper, we propose a new algorithm for pairwise rigid point set registration with unknown point correspondences. The main properties of our method are noise robustness, outlier resistance and global optimal alignment. The problem of registering two point clouds is converted to a minimization of a nonlinear cost function. We propose a new cost function based on an inverse distance kernel that significantly reduces the impact of noise and outliers. In order to achieve a global optimal registration without the need of any initial alignment, we develop a new stochastic approach for global minimization. It is an adaptive sampling method which uses a generalized BSP tree and allows for minimizing nonlinear scalar fields over complex shaped search spaces like, e.g., the space of rotations. We introduce a new technique for a hierarchical decomposition of the rotation space in disjoint equally sized parts called spherical boxes. Furthermore, a procedure for uniform point sampling from spherical boxes is presented. Tests on a variety of point sets show that the proposed registration method performs very well on noisy, outlier corrupted and incomplete data. For comparison, we report how two state-of-the-art registration algorithms perform on the same data sets.",
"In this paper, we present a robust global approach for point cloud registration from uniformly sampled points. Based on eigenvalues and normals computed from multiple scales, we design fast descriptors to extract local structures of these points. The eigenvalue-based descriptor is effective at finding seed matches with low precision using nearest neighbor search. Generally, recovering the transformation from matches with low precision is rather challenging. Therefore, we introduce a mechanism named correspondence propagation to aggregate each seed match into a set of numerous matches. With these sets of matches, multiple transformations between point clouds are computed. A quality function formulated from distance errors is used to identify the best transformation and fulfill a coarse alignment of the point clouds. Finally, we refine the alignment result with the trimmed iterative closest point algorithm. The proposed approach can be applied to register point clouds with significant or limited overlaps and small or large transformations. More encouragingly, it is rather efficient and very robust to noise. A comparison to traditional descriptor-based methods and other global algorithms demonstrates the fine performance of the proposed approach. We also show its promising application in large-scale reconstruction with the scans of two real scenes. In addition, the proposed approach can be used to register low-resolution point clouds captured by Kinect as well.",
"Abstract This paper introduces a new method of registering point sets. The registration error is directly minimized using general-purpose non-linear optimization (the Levenberg–Marquardt algorithm). The surprising conclusion of the paper is that this technique is comparable in speed to the special-purpose Iterated Closest Point algorithm, which is most commonly used for this task. Because the routine directly minimizes an energy function, it is easy to extend it to incorporate robust estimation via a Huber kernel, yielding a basin of convergence that is many times wider than existing techniques. Finally, we introduce a data structure for the minimization based on the chamfer distance transform, which yields an algorithm that is both faster and more robust than previously described methods.",
"This paper reviews the classical problem of free-form curve registration and applies it to an efficient RGB-D visual odometry system called Canny-VO, as it efficiently tracks all Canny edge features extracted from the images. Two replacements for the distance transformation commonly used in edge registration are proposed: approximate nearest neighbor fields and oriented nearest neighbor fields. 3-D–2-D edge alignment benefits from these alternative formulations in terms of both efficiency and accuracy. It removes the need for the more computationally demanding paradigms of data-to-model registration, bilinear interpolation, and subgradient computation. To ensure robustness of the system in the presence of outliers and sensor noise, the registration is formulated as a maximum a posteriori problem and the resulting weighted least-squares objective is solved by the iteratively reweighted least-squares method. A variety of robust weight functions are investigated and the optimal choice is made based on the statistics of the residual errors. Efficiency is furthermore boosted by an adaptively sampled definition of the nearest neighbor fields. Extensive evaluations on public SLAM benchmark sequences demonstrate state-of-the-art performance and an advantage over classical Euclidean distance fields.",
"The authors describe a general-purpose, representation-independent method for the accurate and computationally efficient registration of 3-D shapes including free-form curves and surfaces. The method handles the full six degrees of freedom and is based on the iterative closest point (ICP) algorithm, which requires only a procedure to find the closest point on a geometric entity to a given point. The ICP algorithm always converges monotonically to the nearest local minimum of a mean-square distance metric, and the rate of convergence is rapid during the first few iterations. Therefore, given an adequate set of initial rotations and translations for a particular class of objects with a certain level of 'shape complexity', one can globally minimize the mean-square distance metric over all six degrees of freedom by testing each initial registration. One important application of this method is to register sensed data from unfixtured rigid objects with an ideal geometric model, prior to shape inspection. Experimental results show the capabilities of the registration algorithm on point sets, curves, and surfaces. >",
"In this paper, we introduce a novel and efficient approach to dense image registration, which does not require a derivative of the employed cost function. In such a context, the registration problem is formulated using a discrete Markov random field objective function. First, towards dimensionality reduction on the variables we assume that the dense deformation field can be expressed using a small number of control points (registration grid) and an interpolation strategy. Then, the registration cost is expressed using a discrete sum over image costs (using an arbitrary similarity measure) projected on the control points, and a smoothness term that penalizes local deviations on the deformation field according to a neighborhood system on the grid. Towards a discrete approach, the search space is quantized resulting in a fully discrete model. In order to account for large deformations and produce results on a high resolution level, a multi-scale incremental approach is considered where the optimal solution is iteratively updated. This is done through successive morphings of the source towards the target image. Efficient linear programming using the primal dual principles is considered to recover the lowest potential of the cost function. Very promising results using synthetic data with known deformations and real data demonstrate the potentials of our approach.",
"We present a robot-pose-registration algorithm, which is entirely based on large planar-surface patches extracted from point clouds sampled from a three-dimensional (3-D) sensor. This approach offers an alternative to the traditional point-to-point iterative-closest-point (ICP) algorithm, its point-to-plane variant, as well as newer grid-based algorithms, such as the 3-D normal distribution transform (NDT). The simpler case of known plane correspondences is tackled first by deriving expressions for least-squares pose estimation considering plane-parameter uncertainty computed during plane extraction. Closed-form expressions for covariances are also derived. To round-off the solution, we present a new algorithm, which is called minimally uncertain maximal consensus (MUMC), to determine the unknown plane correspondences by maximizing geometric consistency by minimizing the uncertainty volume in configuration space. Experimental results from three 3-D sensors, viz., Swiss-Ranger, University of South Florida Odetics Laser Detection and Ranging, and an actuated SICK S300, are given. The first two have low fields of view (FOV) and moderate ranges, while the third has a much bigger FOV and range. Experimental results show that this approach is not only more robust than point- or grid-based approaches in plane-rich environments, but it is also faster, requires significantly less memory, and offers a less-cluttered planar-patches-based visualization.",
"Registration is a prerequisite for fusion of geometrically distorted images. Traditionally, intensity-based image registration methods are preferred to feature-based ones due to higher accuracy of the former than that of the latter. To reduce computational load, image registration is often carried out using the approximate-level coefficients of a wavelet-like transform. Directional selectivity of the transform and the objective function used for the coefficients play vital roles in the alignment process of images. This paper introduces an image registration algorithm that uses the approximate-level coefficients of the curvelet transform, directional selectivity of which is better than many wavelet-like transforms. A conditional entropy-based objective function is developed for registration using a suitable probabilistic model of the curvelet coefficients of images. Suitability of the probability distribution of the coefficients is validated using a standard method to assess goodness of fit. To align the distorted images, the affine transformation that possesses parameters related to the translation, rotation, scaling, and shearing is used. Extensive experimentations are carried out to test the performance of the proposed registration method considering that the images are synthetically or naturally distorted. Experimental results show that performance of the proposed registration method is superior to existing methods in terms of commonly used performance metrics.",
"This review introduces a novel deformable image registration paradigm that exploits Markov random field formulation and powerful discrete optimization algorithms. We express deformable registration as a minimal cost graph problem, where nodes correspond to the deformation grid, a node's connectivity corresponds to regularization constraints, and labels correspond to 3D deformations. To cope with both iconic and geometric (landmark-based) registration, we introduce two graphical models, one for each subproblem. The two graphs share interconnected variables, leading to a modular, powerful, and flexible formulation that can account for arbitrary image-matching criteria, various local deformation models, and regularization constraints. To cope with the corresponding optimization problem, we adopt two optimization strategies: a computationally efficient one and a tight relaxation alternative. Promising results demonstrate the potential of this approach. Discrete methods are an important new trend in medical im...",
"We present a framework for robustly estimating registration between a 3D volume image and a 2D projection image and evaluate its precision and robustness in spine interventions for vertebral localization in the presence of anatomical deformation. The framework employs a normalized gradient information similarity metric and multi-start covariance matrix adaptation evolution strategy optimization with local-restarts, which provided improved robustness against deformation and content mismatch. The parallelized implementation allowed orders-of-magnitude acceleration in computation time and improved the robustness of registration via multi-start global optimization. Experiments involved a cadaver specimen and two CT datasets (supine and prone) and 36 C-arm fluoroscopy images acquired with the specimen in four positions (supine, prone, supine with lordosis, prone with kyphosis), three regions (thoracic, abdominal, and lumbar), and three levels of geometric magnification (1.7, 2.0, 2.4). Registration accuracy was evaluated in terms of projection distance error (PDE) between the estimated and true target points in the projection image, including 14 400 random trials (200 trials on the 72 registration scenarios) with initialization error up to ±200 mm and ±10°. The resulting median PDE was better than 0.1 mm in all cases, depending somewhat on the resolution of input CT and fluoroscopy images. The cadaver experiments illustrated the tradeoff between robustness and computation time, yielding a success rate of 99.993 in vertebral labeling (with 'success' defined as PDE <5 mm) using 1,718 664 ± 96 582 function evaluations computed in 54.0 ± 3.5 s on a mid-range GPU (nVidia, GeForce GTX690). Parameters yielding a faster search (e.g., fewer multi-starts) reduced robustness under conditions of large deformation and poor initialization (99.535 success for the same data registered in 13.1 s), but given good initialization (e.g., ±5 mm, assuming a robust initial run) the same registration could be solved with 99.993 success in 6.3 s. The ability to register CT to fluoroscopy in a manner robust to patient deformation could be valuable in applications such as radiation therapy, interventional radiology, and an assistant to target localization (e.g., vertebral labeling) in image-guided spine surgery."
]
} |
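To make the match-then-solve loop described above concrete (closest point pairs are matched at every iteration, and a rigid transformation minimizing their difference is solved), here is a minimal ICP-style sketch for two 3D point sets. It is not the implementation behind any of the cited methods; the brute-force nearest-neighbor search, the fixed iteration count, and the function names are assumptions made purely for illustration.

```python
# Minimal sketch of the iterative matching-and-alignment loop described above
# (in the spirit of ICP); not the implementation of any cited method.
# Point arrays are (N, 3); the iteration count is an illustrative assumption.
import numpy as np

def best_rigid_transform(src: np.ndarray, dst: np.ndarray):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp(moving: np.ndarray, fixed: np.ndarray, iterations: int = 20):
    """Recursively re-match closest points and re-solve the rigid alignment."""
    current = moving.copy()
    for _ in range(iterations):
        # match each moving point to its closest fixed point (brute force)
        d = np.linalg.norm(current[:, None, :] - fixed[None, :, :], axis=-1)
        matched = fixed[np.argmin(d, axis=1)]
        R, t = best_rigid_transform(current, matched)
        current = current @ R.T + t   # warp the current estimate and repeat
    return current
```

Deformable methods such as SyN or B-spline free-form deformation follow the same recursive pattern, but replace the rigid update with a much richer transformation model.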
1907.12353 | 2966108228 | We present recursive cascaded networks, a general architecture that enables learning deep cascades, for deformable image registration. The proposed architecture is simple in design and can be built on any base network. The moving image is warped successively by each cascade and finally aligned to the fixed image; this procedure is recursive in a way that every cascade learns to perform a progressive deformation for the current warped image. The entire system is end-to-end and jointly trained in an unsupervised manner. In addition, enabled by the recursive architecture, one cascade can be iteratively applied for multiple times during testing, which approaches a better fit between each of the image pairs. We evaluate our method on 3D medical images, where deformable registration is most commonly applied. We demonstrate that recursive cascaded networks achieve consistent, significant gains and outperform state-of-the-art methods. The performance reveals an increasing trend as long as more cascades are trained, while the limit is not observed. Our code will be made publicly available. | Learning-based methods have been presented recently. Supervised methods entail considerable effort for labeled data, which can hardly meet realistic demands and thus limits their performance @cite_55 @cite_15 @cite_38 @cite_57 . Unsupervised methods have been proposed to solve this problem. Several initial works show the feasibility of unsupervised learning @cite_35 @cite_43 @cite_1 @cite_49 , among which DLIR @cite_43 performs on par with the B-spline method implemented in SimpleElastix @cite_29 (a multi-language extension of Elastix @cite_0 , which is selected as one of our baseline methods). VoxelMorph @cite_3 and VTN @cite_16 achieve better performance by predicting a dense flow field using deconvolutional layers @cite_21 , whereas DLIR only predicts a sparse displacement grid that is interpolated by a third-order B-spline kernel. VoxelMorph is evaluated only on brain MRI datasets @cite_3 @cite_41 , but later work @cite_16 showed deficiencies on other datasets such as liver CT scans. Additionally, VTN proposes an initial convolutional network that performs an affine transformation before the deformation fields are predicted, leading to a truly end-to-end framework that replaces the traditional affine stage. | {
"cite_N": [
"@cite_38",
"@cite_35",
"@cite_41",
"@cite_55",
"@cite_29",
"@cite_1",
"@cite_21",
"@cite_3",
"@cite_57",
"@cite_43",
"@cite_0",
"@cite_49",
"@cite_15",
"@cite_16"
],
"mid": [
"2785325870",
"2962742544",
"2153378748",
"2891179298",
"2742126485",
"1422227952",
"2553156677",
"2798785261",
"1930723014",
"2284198383",
"2729876886",
"2770205545",
"2774918944",
"2248723555"
],
"abstract": [
"Over the last years, deep convolutional neural networks (ConvNets) have transformed the field of computer vision thanks to their unparalleled capacity to learn high level semantic image features. However, in order to successfully learn those features, they usually require massive amounts of manually labeled data, which is both expensive and impractical to scale. Therefore, unsupervised semantic feature learning, i.e., learning without requiring manual annotation effort, is of crucial importance in order to successfully harvest the vast amount of visual data that are available today. In our work we propose to learn image features by training ConvNets to recognize the 2d rotation that is applied to the image that it gets as input. We demonstrate both qualitatively and quantitatively that this apparently simple task actually provides a very powerful supervisory signal for semantic feature learning. We exhaustively evaluate our method in various unsupervised feature learning benchmarks and we exhibit in all of them state-of-the-art performance. Specifically, our results on those benchmarks demonstrate dramatic improvements w.r.t. prior state-of-the-art approaches in unsupervised representation learning and thus significantly close the gap with supervised feature learning. For instance, in PASCAL VOC 2007 detection task our unsupervised pre-trained AlexNet model achieves the state-of-the-art (among unsupervised methods) mAP of 54.4 that is only 2.4 points lower from the supervised case. We get similarly striking results when we transfer our unsupervised learned features on various other tasks, such as ImageNet classification, PASCAL classification, PASCAL segmentation, and CIFAR-10 classification. The code and models of our paper will be published on: this https URL .",
"Over the last years, deep convolutional neural networks (ConvNets) have transformed the field of computer vision thanks to their unparalleled capacity to learn high level semantic image features. However, in order to successfully learn those features, they usually require massive amounts of manually labeled data, which is both expensive and impractical to scale. Therefore, unsupervised semantic feature learning, i.e., learning without requiring manual annotation effort, is of crucial importance in order to successfully harvest the vast amount of visual data that are available today. In our work we propose to learn image features by training ConvNets to recognize the 2d rotation that is applied to the image that it gets as input. We demonstrate both qualitatively and quantitatively that this apparently simple task actually provides a very powerful supervisory signal for semantic feature learning. We exhaustively evaluate our method in various unsupervised feature learning benchmarks and we exhibit in all of them state-of-the-art performance. Specifically, our results on those benchmarks demonstrate dramatic improvements w.r.t. prior state-of-the-art approaches in unsupervised representation learning and thus significantly close the gap with supervised feature learning. For instance, in PASCAL VOC 2007 detection task our unsupervised pre-trained AlexNet model achieves the state-of-the-art (among unsupervised methods) mAP of 54.4 that is only 2.4 points lower from the supervised case. We get similar striking results when we transfer our unsupervised learned features on various other tasks, such as ImageNet classification, PASCAL classification, PASCAL segmentation, and CIFAR-10 classification.",
"Feedforward multilayer networks trained by supervised learning have recently demonstrated state of the art performance on image labeling problems such as boundary prediction and scene parsing. As even very low error rates can limit practical usage of such systems, methods that perform closer to human accuracy remain desirable. In this work, we propose a new type of network with the following properties that address what we hypothesize to be limiting aspects of existing methods: (1) a wide' structure with thousands of features, (2) a large field of view, (3) recursive iterations that exploit statistical dependencies in label space, and (4) a parallelizable architecture that can be trained in a fraction of the time compared to benchmark multilayer convolutional networks. For the specific image labeling problem of boundary prediction, we also introduce a novel example weighting algorithm that improves segmentation accuracy. Experiments in the challenging domain of connectomic reconstruction of neural circuity from 3d electron microscopy data show that these \"Deep And Wide Multiscale Recursive\" (DAWMR) networks lead to new levels of image labeling performance. The highest performing architecture has twelve layers, interwoven supervised and unsupervised stages, and uses an input field of view of 157,464 voxels ( @math ) to make a prediction at each image location. We present an associated open source software package that enables the simple and flexible creation of DAWMR networks.",
"We present an adversarial domain adaptation based deep learning approach for automatic tumor segmentation from T2-weighted MRI. Our approach is composed of two steps: (i) a tumor-aware unsupervised cross-domain adaptation (CT to MRI), followed by (ii) semi-supervised tumor segmentation using Unet trained with synthesized and limited number of original MRIs. We introduced a novel target specific loss, called tumor-aware loss, for unsupervised cross-domain adaptation that helps to preserve tumors on synthesized MRIs produced from CT images. In comparison, state-of-the art adversarial networks trained without our tumor-aware loss produced MRIs with ill-preserved or missing tumors. All networks were trained using labeled CT images from 377 patients with non-small cell lung cancer obtained from the Cancer Imaging Archive and unlabeled T2w MRIs from a completely unrelated cohort of 6 patients with pre-treatment and 36 on-treatment scans. Next, we combined 6 labeled pre-treatment MRI scans with the synthesized MRIs to boost tumor segmentation accuracy through semi-supervised learning. Semi-supervised training of cycle-GAN produced a segmentation accuracy of 0.66 computed using Dice Score Coefficient (DSC). Our method trained with only synthesized MRIs produced an accuracy of 0.74 while the same method trained in semi-supervised setting produced the best accuracy of 0.80 on test. Our results show that tumor-aware adversarial domain adaptation helps to achieve reasonably accurate cancer segmentation from limited MRI data by leveraging large CT datasets.",
"A significant proportion of patients scanned in a clinical setting have follow-up scans. We show in this work that such longitudinal scans alone can be used as a form of “free” self-supervision for training a deep network. We demonstrate this self-supervised learning for the case of T2-weighted sagittal lumbar Magnetic Resonance Images (MRIs). A Siamese convolutional neural network (CNN) is trained using two losses: (i) a contrastive loss on whether the scan is of the same person (i.e. longitudinal) or not, together with (ii) a classification loss on predicting the level of vertebral bodies. The performance of this pre-trained network is then assessed on a grading classification task. We experiment on a dataset of 1016 subjects, 423 possessing follow-up scans, with the end goal of learning the disc degeneration radiological gradings attached to the intervertebral discs. We show that the performance of the pre-trained CNN on the supervised classification task is (i) superior to that of a network trained from scratch; and (ii) requires far fewer annotated training samples to reach an equivalent performance to that of the network trained from scratch.",
"Learning representative computational models from medical imaging data requires large training data sets. Often, voxel-level annotation is unfeasible for sufficient amounts of data. An alternative to manual annotation, is to use the enormous amount of knowledge encoded in imaging data and corresponding reports generated during clinical routine. Weakly supervised learning approaches can link volume-level labels to image content but suffer from the typical label distributions in medical imaging data where only a small part consists of clinically relevant abnormal structures. In this paper we propose to use a semantic representation of clinical reports as a learning target that is predicted from imaging data by a convolutional neural network. We demonstrate how we can learn accurate voxel-level classifiers based on weak volume-level semantic descriptions on a set of 157 optical coherence tomography (OCT) volumes. We specifically show how semantic information increases classification accuracy for intraretinal cystoid fluid (IRC), subretinal fluid (SRF) and normal retinal tissue, and how the learning algorithm links semantic concepts to image content and geometry.",
"Convolutional neural networks (ConvNets) have achieved excellent recognition performance in various visual recognition tasks. A large labeled training set is one of the most important factors for its success. However, it is difficult to collect sufficient training images with precise labels in some domains, such as apparent age estimation, head pose estimation, multilabel classification, and semantic segmentation. Fortunately, there is ambiguous information among labels, which makes these tasks different from traditional classification. Based on this observation, we convert the label of each image into a discrete label distribution, and learn the label distribution by minimizing a Kullback–Leibler divergence between the predicted and ground-truth label distributions using deep ConvNets. The proposed deep label distribution learning (DLDL) method effectively utilizes the label ambiguity in both feature learning and classifier learning, which help prevent the network from overfitting even when the training set is small. Experimental results show that the proposed approach produces significantly better results than the state-of-the-art methods for age estimation and head pose estimation. At the same time, it also improves recognition performance for multi-label classification and semantic segmentation tasks.",
"Convolutional networks (ConvNets) have achieved great successes in various challenging vision tasks. However, the performance of ConvNets would degrade when encountering the domain shift. The domain adaptation is more significant while challenging in the field of biomedical image analysis, where cross-modality data have largely different distributions. Given that annotating the medical data is especially expensive, the supervised transfer learning approaches are not quite optimal. In this paper, we propose an unsupervised domain adaptation framework with adversarial learning for cross-modality biomedical image segmentations. Specifically, our model is based on a dilated fully convolutional network for pixel-wise prediction. Moreover, we build a plug-and-play domain adaptation module (DAM) to map the target input to features which are aligned with source domain feature space. A domain critic module (DCM) is set up for discriminating the feature space of both domains. We optimize the DAM and DCM via an adversarial loss without using any target domain label. Our proposed method is validated by adapting a ConvNet trained with MRI images to unpaired CT data for cardiac structures segmentations, and achieved very promising results.",
"Dense semantic 3D reconstruction is typically formulated as a discrete or continuous problem over label assignments in a voxel grid, combining semantic and depth likelihoods in a Markov Random Field framework. The depth and semantic information is incorporated as a unary potential, smoothed by a pairwise regularizer. However, modelling likelihoods as a unary potential does not model the problem correctly leading to various undesirable visibility artifacts. We propose to formulate an optimization problem that directly optimizes the reprojection error of the 3D model with respect to the image estimates, which corresponds to the optimization over rays, where the cost function depends on the semantic class and depth of the first occupied voxel along the ray. The 2-label formulation is made feasible by transforming it into a graph-representable form under QPBO relaxation, solvable using graph cut. The multi-label problem is solved by applying α-expansion using the same relaxation in each expansion move. Our method was indeed shown to be feasible in practice, running comparably fast to the competing methods, while not suffering from ray potential approximation artifacts.",
"Abstract Brain extraction from magnetic resonance imaging (MRI) is crucial for many neuroimaging workflows. Current methods demonstrate good results on non-enhanced T1-weighted images, but struggle when confronted with other modalities and pathologically altered tissue. In this paper we present a 3D convolutional deep learning architecture to address these shortcomings. In contrast to existing methods, we are not limited to non-enhanced T1w images. When trained appropriately, our approach handles an arbitrary number of modalities including contrast-enhanced scans. Its applicability to MRI data, comprising four channels: non-enhanced and contrast-enhanced T1w, T2w and FLAIR contrasts, is demonstrated on a challenging clinical data set containing brain tumors (N = 53), where our approach significantly outperforms six commonly used tools with a mean Dice score of 95.19. Further, the proposed method at least matches state-of-the-art performance as demonstrated on three publicly available data sets: IBSR, LPBA40 and OASIS, totaling N = 135 volumes. For the IBSR (96.32) and LPBA40 (96.96) data set the convolutional neuronal network (CNN) obtains the highest average Dice scores, albeit not being significantly different from the second best performing method. For the OASIS data the second best Dice (95.02) results are achieved, with no statistical difference in comparison to the best performing tool. For all data sets the highest average specificity measures are evaluated, whereas the sensitivity displays about average results. Adjusting the cut-off threshold for generating the binary masks from the CNN's probability output can be used to increase the sensitivity of the method. Of course, this comes at the cost of a decreased specificity and has to be decided application specific. Using an optimized GPU implementation predictions can be achieved in less than one minute. The proposed method may prove useful for large-scale studies and clinical trials.",
"Brain extraction or whole brain segmentation is an important first step in many of the neuroimage analysis pipelines. The accuracy and the robustness of brain extraction, therefore, are crucial for the accuracy of the entire brain analysis process. The state-of-the-art brain extraction techniques rely heavily on the accuracy of alignment or registration between brain atlases and query brain anatomy, and or make assumptions about the image geometry, and therefore have limited success when these assumptions do not hold or image registration fails. With the aim of designing an accurate, learning-based, geometry-independent, and registration-free brain extraction tool, in this paper, we present a technique based on an auto-context convolutional neural network (CNN), in which intrinsic local and global image features are learned through 2-D patches of different window sizes. We consider two different architectures: 1) a voxelwise approach based on three parallel 2-D convolutional pathways for three different directions (axial, coronal, and sagittal) that implicitly learn 3-D image information without the need for computationally expensive 3-D convolutions and 2) a fully convolutional network based on the U-net architecture. Posterior probability maps generated by the networks are used iteratively as context information along with the original image patches to learn the local shape and connectedness of the brain to extract it from non-brain tissue. The brain extraction results we have obtained from our CNNs are superior to the recently reported results in the literature on two publicly available benchmark data sets, namely, LPBA40 and OASIS, in which we obtained the Dice overlap coefficients of 97.73 and 97.62 , respectively. Significant improvement was achieved via our auto-context algorithm. Furthermore, we evaluated the performance of our algorithm in the challenging problem of extracting arbitrarily oriented fet al brains in reconstructed fet al brain magnetic resonance imaging (MRI) data sets. In this application, our voxelwise auto-context CNN performed much better than the other methods (Dice coefficient: 95.97 ), where the other methods performed poorly due to the non-standard orientation and geometry of the fet al brain in MRI. Through training, our method can provide accurate brain extraction in challenging applications. This, in turn, may reduce the problems associated with image registration in segmentation tasks.",
"Recovering images from undersampled linear measurements typically leads to an ill-posed linear inverse problem, that asks for proper statistical priors. Building effective priors is however challenged by the low train and test overhead dictated by real-time tasks; and the need for retrieving visually \"plausible\" and physically \"feasible\" images with minimal hallucination. To cope with these challenges, we design a cascaded network architecture that unrolls the proximal gradient iterations by permeating benefits from generative residual networks (ResNet) to modeling the proximal operator. A mixture of pixel-wise and perceptual costs is then deployed to train proximals. The overall architecture resembles back-and-forth projection onto the intersection of feasible and plausible images. Extensive computational experiments are examined for a global task of reconstructing MR images of pediatric patients, and a more local task of superresolving CelebA faces, that are insightful to design efficient architectures. Our observations indicate that for MRI reconstruction, a recurrent ResNet with a single residual block effectively learns the proximal. This simple architecture appears to significantly outperform the alternative deep ResNet architecture by 2dB SNR, and the conventional compressed-sensing MRI by 4dB SNR with 100x faster inference. For image superresolution, our preliminary results indicate that modeling the denoising proximal demands deep ResNets.",
"Convolutional neural networks (CNNs) have been successfully applied to many recognition and learning tasks using a universal recipe; training a deep model on a very large dataset of supervised examples. However, this approach is rather restrictive in practice since collecting a large set of labeled images is very expensive. One way to ease this problem is coming up with smart ways for choosing images to be labelled from a very large collection (ie. active learning). Our empirical study suggests that many of the active learning heuristics in the literature are not effective when applied to CNNs in batch setting. Inspired by these limitations, we define the problem of active learning as core-set selection, ie. choosing set of points such that a model learned over the selected subset is competitive for the remaining data points. We further present a theoretical result characterizing the performance of any selected subset using the geometry of the datapoints. As an active learning algorithm, we choose the subset which is expected to yield best result according to our characterization. Our experiments show that the proposed method significantly outperforms existing approaches in image classification experiments by a large margin.",
"Deep learning methods such as convolutional neural networks (CNNs) can deliver highly accurate classification results when provided with large enough data sets and respective labels. However, using CNNs along with limited labeled data can be problematic, as this leads to extensive overfitting. In this letter, we propose a novel method by considering a pretrained CNN designed for tackling an entirely different classification problem, namely, the ImageNet challenge, and exploit it to extract an initial set of representations. The derived representations are then transferred into a supervised CNN classifier, along with their class labels, effectively training the system. Through this two-stage framework, we successfully deal with the limited-data problem in an end-to-end processing scheme. Comparative results over the UC Merced Land Use benchmark prove that our method significantly outperforms the previously best stated results, improving the overall accuracy from 83.1 up to 92.4 . Apart from statistical improvements, our method introduces a novel feature fusion algorithm that effectively tackles the large data dimensionality by using a simple and computationally efficient approach."
]
} |
1907.12212 | 2965633139 | This paper is concerned with voting processes on graphs where each vertex holds one of two different opinions. In particular, we study the Best-of-two and the Best-of-three. Here at each synchronous and discrete time step, each vertex updates its opinion to match the majority among the opinions of two random neighbors and itself (the Best-of-two) or the opinions of three random neighbors (the Best-of-three). Previous studies have explored these processes on complete graphs and expander graphs, but we understand significantly less about their properties on graphs with more complicated structures. In this paper, we study the Best-of-two and the Best-of-three on the stochastic block model @math , which is a random graph consisting of two distinct Erdős-Rényi graphs @math joined by random edges with density @math . We obtain two main results. First, if @math and @math is a constant, we show that there is a phase transition in @math with threshold @math (specifically, @math for the Best-of-two, and @math for the Best-of-three). If @math , the process reaches consensus within @math steps for any initial opinion configuration with a bias of @math . By contrast, if @math , we show that, for any initial opinion configuration, the process reaches consensus within @math steps. To the best of our knowledge, this is the first result concerning multiple-choice voting for arbitrary initial opinion configurations on non-complete graphs. | Other studies have focused on voting processes with more general updating rules. Cooper and Rivera @cite_5 studied the linear voting model, whose updating rule is characterized by a set of @math binary matrices. This model covers the synchronous pull and the asynchronous push-pull voting processes. However, it does not cover the Best-of-two and the Best-of-three. Schoenebeck and Yu @cite_0 studied asynchronous voting processes whose updating functions are majority-like (including the asynchronous Best-of- @math voting processes). They gave upper bounds on the consensus times of such models on dense Erdős-Rényi random graphs using a potential technique. | {
"cite_N": [
"@cite_0",
"@cite_5"
],
"mid": [
"2533034710",
"1494224459"
],
"abstract": [
"We study voting models on graphs. In the beginning, the vertices of a given graph have some initial opinion. Over time, the opinions on the vertices change by interactions between graph neighbours. Under suitable conditions the system evolves to a state in which all vertices have the same opinion. In this work, we consider a new model of voting, called the Linear Voting Model. This model can be seen as a generalization of several models of voting, including among others, pull voting and push voting. One advantage of our model is that, even though it is very general, it has a rich structure making the analysis tractable. In particular we are able to solve the basic question about voting, the probability that certain opinion wins the poll, and furthermore, given appropriate conditions, we are able to bound the expected time until some opinion wins.",
"We introduce and study the reverse voter model, a dynamics for spin variables similar to the well‐known voter dynamics. The difference is in the way neighbors influence each other: once a node is selected and one among its neighbors chosen, the neighbor is made equal to the selected node, while in the usual voter dynamics the update goes in the opposite direction. The reverse voter dynamics is studied analytically, showing that on networks with degree distribution decaying as k−v, the time to reach consensus is linear in the system size N for all v > 2. The consensus time for link‐update voter dynamics is computed as well. We verify the results numerically on a class of uncorrelated scale‐free graphs."
]
} |
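The Best-of-two and Best-of-three dynamics described in the row above are easy to simulate directly. The sketch below is an illustrative toy implementation written for this note rather than code from the cited papers; the block size, the edge probabilities p and q, the initial bias, and the step cap are all assumed values.

```python
import random

def sbm_two_blocks(n, p, q, rng):
    """Toy two-block stochastic block model on vertices 0..2n-1:
    intra-block edge probability p, inter-block edge probability q."""
    adj = {v: set() for v in range(2 * n)}
    for u in range(2 * n):
        for v in range(u + 1, 2 * n):
            same_block = (u < n) == (v < n)
            if rng.random() < (p if same_block else q):
                adj[u].add(v)
                adj[v].add(u)
    return adj

def synchronous_step(adj, opinion, rng, rule="best-of-three"):
    """One synchronous round. Best-of-three: majority of three random
    neighbors; Best-of-two: majority of two random neighbors and self."""
    new_opinion = {}
    for v, nbrs in adj.items():
        if not nbrs:
            new_opinion[v] = opinion[v]
            continue
        k = 3 if rule == "best-of-three" else 2
        picks = [opinion[rng.choice(tuple(nbrs))] for _ in range(k)]
        if rule == "best-of-two":
            picks.append(opinion[v])
        new_opinion[v] = 1 if sum(picks) >= 2 else 0
    return new_opinion

if __name__ == "__main__":
    rng = random.Random(0)
    n, p, q = 150, 0.2, 0.05                      # assumed toy parameters
    adj = sbm_two_blocks(n, p, q, rng)
    # initial configuration with a small assumed bias toward opinion 1
    opinion = {v: 1 if rng.random() < 0.55 else 0 for v in range(2 * n)}
    for step in range(1, 501):
        opinion = synchronous_step(adj, opinion, rng)
        ones = sum(opinion.values())
        if ones in (0, 2 * n):
            print(f"consensus on opinion {1 if ones else 0} after {step} steps")
            break
    else:
        print("no consensus within 500 steps")
```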
1907.12079 | 2964825438 | Topic modeling is commonly used to analyze and understand large document collections. However, in practice, users want to focus on specific aspects or "targets" rather than the entire corpus. For example, given a large collection of documents, users may want only a smaller subset which more closely aligns with their interests, tasks, and domains. In particular, our paper focuses on large-scale document retrieval with high recall where any missed relevant documents can be critical. A simple keyword matching search is generally not effective nor efficient as 1) it is difficult to find a list of keyword queries that can cover the documents of interest before exploring the dataset, 2) some documents may not contain the exact keywords of interest but may still be highly relevant, and 3) some words have multiple meanings, which would result in irrelevant documents included in the retrieved subset. In this paper, we present TopicSifter, a visual analytics system for interactive search space reduction. Our system utilizes targeted topic modeling based on nonnegative matrix factorization and allows users to give relevance feedback in order to refine their target and guide the topic modeling to the most relevant results. | Various information visualization techniques have been applied to improve user interfaces for search. Some systems augment search result lists with additional small visualizations. For example, TileBars @cite_26 , INSYDER @cite_5 , and HotMap @cite_25 visualize query-document relationships as icons or glyphs alongside search results. Another approach is to visualize search results in a spatial layout where proximity represents similarity. Systems such as InfoSky @cite_20 and IN-SPIRE @cite_19 are examples. FacetAtlas @cite_3 overlays additional heatmaps to visualize density. ProjSnippet @cite_34 visualizes text snippets in a 2-D layout. Many others cluster the search results and offer faceted navigation. FacetMap @cite_4 and ResultMap @cite_27 utilizes treemap-style visualizations to represent facets. These systems may guide users well in exploring search results, but they are mostly based on static search queries. Our system goes beyond search results exploration and offers interactive target (query) building. | {
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_3",
"@cite_19",
"@cite_27",
"@cite_5",
"@cite_34",
"@cite_25",
"@cite_20"
],
"mid": [
"137863291",
"2663075978",
"2158943277",
"2105397135",
"2766201361",
"2027855569",
"1990402145",
"1493108551",
"2526145926"
],
"abstract": [
"Many real-world domains can be represented as large node-link graphs: backbone Internet routers connect with 70,000 other hosts, mid-sized Web servers handle between 20,000 and 200,000 hyperlinked documents, and dictionaries contain millions of words defined in terms of each other. Computational manipulation of such large graphs is common, but previous tools for graph visualization have been limited to datasets of a few thousand nodes. Visual depictions of graphs and networks are external representations that exploit human visual processing to reduce the cognitive load of many tasks that require understanding of global or local structure. We assert that the two key advantages of computer-based systems for information visualization over traditional paper-based visual exposition are interactivity and scalability. We also argue that designing visualization software by taking the characteristics of a target user's task domain into account leads to systems that are more effective and scale to larger datasets than previous work. This thesis contains a detailed analysis of three specialized systems for the interactive exploration of large graphs, relating the intended tasks to the spatial layout and visual encoding choices. We present two novel algorithms for specialized layout and drawing that use quite different visual metaphors. The H3 system for visualizing the hyperlink structures of web sites scales to datasets of over 100,000 nodes by using a carefully chosen spanning tree as the layout backbone, 3D hyperbolic geometry for a Focus+Context view, and provides a fluid interactive experience through guaranteed frame rate drawing. The Constellation system features a highly specialized 2D layout intended to spatially encode domain-specific information for computational linguists checking the plausibility of a large semantic network created from dictionaries. The Planet Multicast system for displaying the tunnel topology of the Internet's multicast backbone provides a literal 3D geographic layout of arcs on a globe to help MBone maintainers find misconfigured long-distance tunnels. Each of these three systems provides a very different view of the graph structure, and we evaluate their efficacy for the intended task. We generalize these findings in our analysis of the importance of interactivity and specialization for graph visualization systems that are effective and scalable.",
"In this paper, we present the results of a user study on exploratory search activities in a social science digital library. We conducted a user study with 32 participants with a social sciences background—16 postdoctoral researchers and 16 students—who were asked to solve a task on searching related work to a given topic. The exploratory search task was performed in a 10-min time slot. The use of certain search activities is measured and compared to gaze data recorded with an eye tracking device. We use a novel tree graph representation to visualise the users’ search patterns and introduce a way to combine multiple search session trees. The tree graph representation is capable of creating one single tree for multiple users and identifying common search patterns. In addition, the information behaviour of students and postdoctoral researchers is being compared. The results show that search activities on the stratagem level are frequently utilised by both user groups. The most heavily used search activities were keyword search, followed by browsing through references and citations, and author searching. The eye tracking results showed an intense examination of documents metadata, especially on the level of citations and references. When comparing the group of students and postdoctoral researchers, we found significant differences regarding gaze data on the area of the journal name of the seed document. In general, we found a tendency of the postdoctoral researchers to examine the metadata records more intensively with regard to dwell time and the number of fixations. By creating combined session trees and deriving subtrees from those, we were able to identify common patterns like economic (explorative) and exhaustive (navigational) behaviour. Our results show that participants utilised multiple search strategies starting from the seed document, which means that they examined different paths to find related publications.",
"In common Web-based search interfaces, it can be difficult to formulate queries that simultaneously combine temporal, spatial, and topical data filters. We investigate how coordinated visualizations can enhance search and exploration of information on the World Wide Web by easing the formulation of these types of queries. Drawing from visual information seeking and exploratory search, we introduce VisGets - interactive query visualizations of Web-based information that operate with online information within a Web browser. VisGets provide the information seeker with visual overviews of Web resources and offer a way to visually filter the data. Our goal is to facilitate the construction of dynamic search queries that combine filters from more than one data dimension. We present a prototype information exploration system featuring three linked VisGets (temporal, spatial, and topical), and used it to visually explore news items from online RSS feeds.",
"In this paper, we have developed a novel framework called JustClick to enable personalized image recommendation via exploratory search from large-scale collections of Flickr images. First, a topic network is automatically generated to summarize large-scale collections of Flickr images at a semantic level. Hyperbolic visualization is further used to enable interactive navigation and exploration of the topic network, so that users can gain insights of large-scale image collections at the first glance, build up their mental query models interactively and specify their queries (i.e., image needs) more precisely by selecting the image topics on the topic network directly. Thus, our personalized query recommendation framework can effectively address both the problem of query formulation and the problem of vocabulary discrepancy and null returns. Second, a small set of most representative images are recommended for the given image topic according to their representativeness scores. Kernel principal component analysis and hyperbolic visualization are seamlessly integrated to organize and layout the recommended images (i.e., most representative images) according to their nonlinear visual similarity contexts, so that users can assess the relevance between the recommended images and their real query intentions interactively. An interactive interface is implemented to allow users to express their time-varying query intentions precisely and to direct our JustClick system to more relevant images according to their personal preferences. Our experiments on large-scale collections of Flickr images show very positive results.",
"This work proposes a process for efficiently searching over combinations of individual object 6D pose hypotheses in cluttered scenes, especially in cases involving occlusions and objects resting on each other. The initial set of candidate object poses is generated from state-of-the-art object detection and global point cloud registration techniques. The best-scored pose per object by using these techniques may not be accurate due to overlaps and occlusions. Nevertheless, experimental indications provided in this work show that object poses with lower ranks may be closer to the real poses than ones with high ranks according to registration techniques. This motivates a global optimization process for improving these poses by taking into account scene-level physical interactions between objects. It also implies that the Cartesian product of candidate poses for interacting objects must be searched so as to identify the best scene-level hypothesis. To perform the search efficiently, the candidate poses for each object are clustered so as to reduce their number but still keep a sufficient diversity. Then, searching over the combinations of candidate object poses is performed through a Monte Carlo Tree Search (MCTS) process that uses the similarity between the observed depth image of the scene and a rendering of the scene given the hypothesized pose as a score that guides the search procedure. MCTS handles in a principled way the tradeoff between fine-tuning the most promising poses and exploring new ones, by using the Upper Confidence Bound (UCB) technique. Experimental results indicate that this process is able to quickly identify in cluttered scenes physically-consistent object poses that are significantly closer to ground truth compared to poses found by point cloud registration methods.",
"For an investigative journalist, a large collection of documents obtained from a Freedom of Information Act request or a leak is both a blessing and a curse: such material may contain multiple newsworthy stories, but it can be difficult and time consuming to find relevant documents. Standard text search is useful, but even if the search target is known it may not be possible to formulate an effective query. In addition, summarization is an important non-search task. We present Overview , an application for the systematic analysis of large document collections based on document clustering, visualization, and tagging. This work contributes to the small set of design studies which evaluate a visualization system “in the wild”, and we report on six case studies where Overview was voluntarily used by self-initiated journalists to produce published stories. We find that the frequently-used language of “exploring” a document collection is both too vague and too narrow to capture how journalists actually used our application. Our iterative process, including multiple rounds of deployment and observations of real world usage, led to a much more specific characterization of tasks. We analyze and justify the visual encoding and interaction techniques used in Overview 's design with respect to our final task abstractions, and propose generalizable lessons for visualization design methodology.",
"We propose a method to highlight query hits in hierarchically clustered collections of interrelated items such as digital libraries or knowledge bases. The method is based on the idea that organizing search results similarly to their arrangement on a fixed reference map facilitates orientation and assessment by preserving a user's mental map. Here, the reference map is built from an MDS layout of the items in a Voronoi treemap representing their hierarchical clustering, and we use techniques from dynamic graph layout to align query results with the map. The approach is illustrated on an archive of newspaper articles.",
"This dissertation investigates the role of contextual information in the automated retrieval and display of full-text documents, using robust natural language processing algorithms to automatically detect structure in and assign topic labels to texts. Many long texts are comprised of complex topic and subtopic structure, a fact ignored by existing information access methods. I present two algorithms which detect such structure, and two visual display paradigms which use the results of these algorithms to show the interactions of multiple main topics, multiple subtopics, and the relations between main topics and subtopics. The first algorithm, called TextTiling , recognizes the subtopic structure of texts as dictated by their content. It uses domain-independent lexical frequency and distribution information to partition texts into multi-paragraph passages. The results are found to correspond well to reader judgments of major subtopic boundaries. The second algorithm assigns multiple main topic labels to each text, where the labels are chosen from pre-defined, intuitive category sets; the algorithm is trained on unlabeled text. A new iconic representation, called TileBars uses TextTiles to simultaneously and compactly display query term frequency, query term distribution and relative document length. This representation provides an informative alternative to ranking long texts according to their overall similarity to a query. For example, a user can choose to view those documents that have an extended discussion of one set of terms and a brief but overlapping discussion of a second set of terms. This representation also allows for relevance feedback on patterns of term distribution. TileBars display documents only in terms of words supplied in the user query. For a given retrieved text, if the query words do not correspond to its main topics, the user cannot discern in what context the query terms were used. For example, a query on contaminants may retrieve documents whose main topics relate to nuclear power, food, or oil spills. To address this issue, I describe a graphical interface, called Cougar , that displays retrieved documents in terms of interactions among their automatically-assigned main topics, thus allowing users to familiarize themselves with the topics and terminology of a text collection.",
"Visual search and image retrieval underpin numerous applications, however the task is still challenging predominantly due to the variability of object appearance and ever increasing size of the databases, often exceeding billions of images. Prior art methods rely on aggregation of local scale-invariant descriptors, such as SIFT, via mechanisms including Bag of Visual Words (BoW), Vector of Locally Aggregated Descriptors (VLAD) and Fisher Vectors (FV). However, their performance is still short of what is required. This paper presents a novel method for deriving a compact and distinctive representation of image content called Robust Visual Descriptor with Whitening (RVD-W). It significantly advances the state of the art and delivers world-class performance. In our approach local descriptors are rank-assigned to multiple clusters. Residual vectors are then computed in each cluster, normalized using a direction-preserving normalization function and aggregated based on the neighborhood rank. Importantly, the residual vectors are de-correlated and whitened in each cluster before aggregation, leading to a balanced energy distribution in each dimension and significantly improved performance. We also propose a new post-PCA normalization approach which improves separability between the matching and non-matching global descriptors. This new normalization benefits not only our RVD-W descriptor but also improves existing approaches based on FV and VLAD aggregation. Furthermore, we show that the aggregation framework developed using hand-crafted SIFT features also performs exceptionally well with Convolutional Neural Network (CNN) based features. The RVD-W pipeline outperforms state-of-the-art global descriptors on both the Holidays and Oxford datasets. On the large scale datasets, Holidays1M and Oxford1M, SIFT-based RVD-W representation obtains a mAP of 45.1 and 35.1 percent, while CNN-based RVD-W achieve a mAP of 63.5 and 44.8 percent, all yielding superior performance to the state-of-the-art."
]
} |
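As a rough baseline for the keyword-matching search that the abstract above argues is insufficient, a seed query can be scored against each document with tf-idf and cosine similarity, keeping only documents above a relevance threshold. This is a hypothetical sketch assuming scikit-learn is installed; the corpus, the seed query, and the threshold are invented for illustration and are not part of the cited systems.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus and a seed-keyword "target" (all data here is made up).
docs = [
    "deep learning for medical image segmentation",
    "stock market prediction with recurrent networks",
    "brain MRI tumor segmentation using CNNs",
    "restaurant reviews and sentiment analysis",
]
seed_query = "medical image segmentation MRI"

vectorizer = TfidfVectorizer(stop_words="english")
doc_vecs = vectorizer.fit_transform(docs)        # documents -> tf-idf vectors
query_vec = vectorizer.transform([seed_query])   # query in the same vocabulary

scores = cosine_similarity(query_vec, doc_vecs).ravel()
threshold = 0.2                                  # assumed relevance cut-off
kept = [(s, d) for s, d in zip(scores, docs) if s >= threshold]
for s, d in sorted(kept, reverse=True):
    print(f"{s:.2f}  {d}")
```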
1907.12079 | 2964825438 | Topic modeling is commonly used to analyze and understand large document collections. However, in practice, users want to focus on specific aspects or "targets" rather than the entire corpus. For example, given a large collection of documents, users may want only a smaller subset which more closely aligns with their interests, tasks, and domains. In particular, our paper focuses on large-scale document retrieval with high recall where any missed relevant documents can be critical. A simple keyword matching search is generally not effective nor efficient as 1) it is difficult to find a list of keyword queries that can cover the documents of interest before exploring the dataset, 2) some documents may not contain the exact keywords of interest but may still be highly relevant, and 3) some words have multiple meanings, which would result in irrelevant documents included in the retrieved subset. In this paper, we present TopicSifter, a visual analytics system for interactive search space reduction. Our system utilizes targeted topic modeling based on nonnegative matrix factorization and allows users to give relevance feedback in order to refine their target and guide the topic modeling to the most relevant results. | Although topic summarization has been studied for a long time, discovering a topic summary of a specific aspect (or target) is a relatively new research problem. TTM @cite_12 is the first work to propose the term 'targeted topic modeling'. This work proposes a probabilistic model that is a variation of latent Dirichlet allocation (LDA) @cite_10 . Given a static keyword list defining a particular aspect, the model identifies topic keywords related to this aspect. @cite_23 identifies a list of target words from review data and disentangles aspect words and opinion words from the list. APSUM @cite_24 assigns aspects to each word in a generative process. Since the aforementioned models generate topic keywords based on a static keyword list, a dynamic model is desired. An automatic method to generate keywords dynamically has been proposed @cite_41 . This method focuses on the on-line environment of Twitter and automatically generates keywords based on the time-evolving word graph. | {
"cite_N": [
"@cite_41",
"@cite_24",
"@cite_23",
"@cite_10",
"@cite_12"
],
"mid": [
"2251734881",
"1714665356",
"2104483432",
"2611254175",
"2169606435"
],
"abstract": [
"Topic Model such as Latent Dirichlet Allocation(LDA) makes assumption that topic assignment of different words are conditionally independent. In this paper, we propose a new model Extended Global Topic Random Field (EGTRF) to model non-linear dependencies between words. Specifically, we parse sentences into dependency trees and represent them as a graph, and assume the topic assignment of a word is influenced by its adjacent words and distance-2 words. Word similarity information learned from large corpus is incorporated to enhance word topic assignment. Parameters are estimated efficiently by variational inference and experimental results on two datasets show EGTRF achieves lower perplexity and higher log predictive probability.",
"Uncovering the topics within short texts, such as tweets and instant messages, has become an important task for many content analysis applications. However, directly applying conventional topic models (e.g. LDA and PLSA) on such short texts may not work well. The fundamental reason lies in that conventional topic models implicitly capture the document-level word co-occurrence patterns to reveal topics, and thus suffer from the severe data sparsity in short documents. In this paper, we propose a novel way for modeling topics in short texts, referred as biterm topic model (BTM). Specifically, in BTM we learn the topics by directly modeling the generation of word co-occurrence patterns (i.e. biterms) in the whole corpus. The major advantages of BTM are that 1) BTM explicitly models the word co-occurrence patterns to enhance the topic learning; and 2) BTM uses the aggregated patterns in the whole corpus for learning topics to solve the problem of sparse word co-occurrence patterns at document-level. We carry out extensive experiments on real-world short text collections. The results demonstrate that our approach can discover more prominent and coherent topics, and significantly outperform baseline methods on several evaluation metrics. Furthermore, we find that BTM can outperform LDA even on normal texts, showing the potential generality and wider usage of the new topic model.",
"Statistical approaches to automatic text summarization based on term frequency continue to perform on par with more complex summarization methods. To compute useful frequency statistics, however, the semantically important words must be separated from the low-content function words. The standard approach of using an a priori stopword list tends to result in both undercoverage, where syntactical words are seen as semantically relevant, and overcoverage, where words related to content are ignored. We present a generative probabilistic modeling approach to building content distributions for use with statistical multi-document summarization where the syntax words are learned directly from the data with a Hidden Markov Model and are thereby deemphasized in the term frequency statistics. This approach is compared to both a stopword-list and POS-tagging approach and our method demonstrates improved coverage on the DUC 2006 and TAC 2010 datasets using the ROUGE metric.",
"Abstractive summarization aims to generate a shorter version of the document covering all the salient points in a compact and coherent fashion. On the other hand, query-based summarization highlights those points that are relevant in the context of a given query. The encode-attend-decode paradigm has achieved notable success in machine translation, extractive summarization, dialog systems, etc. But it suffers from the drawback of generation of repeated phrases. In this work we propose a model for the query-based summarization task based on the encode-attend-decode paradigm with two key additions (i) a query attention model (in addition to document attention model) which learns to focus on different portions of the query at different time steps (instead of using a static representation for the query) and (ii) a new diversity based attention model which aims to alleviate the problem of repeating phrases in the summary. In order to enable the testing of this model we introduce a new query-based summarization dataset building on debatepedia. Our experiments show that with these two additions the proposed model clearly outperforms vanilla encode-attend-decode models with a gain of 28 (absolute) in ROUGE-L scores.",
"Most topic models, such as latent Dirichlet allocation, rely on the bag-of-words assumption. However, word order and phrases are often critical to capturing the meaning of text in many text mining tasks. This paper presents topical n-grams, a topic model that discovers topics as well as topical phrases. The probabilistic model generates words in their textual order by, for each word, first sampling a topic, then sampling its status as a unigram or bigram, and then sampling the word from a topic-specific unigram or bigram distribution. Thus our model can model \"white house\" as a special meaning phrase in the 'politics' topic, but not in the 'real estate' topic. Successive bigrams form longer phrases. We present experiments showing meaningful phrases and more interpretable topics from the NIPS data and improved information retrieval performance on a TREC collection."
]
} |
1907.12079 | 2964825438 | Topic modeling is commonly used to analyze and understand large document collections. However, in practice, users want to focus on specific aspects or "targets" rather than the entire corpus. For example, given a large collection of documents, users may want only a smaller subset which more closely aligns with their interests, tasks, and domains. In particular, our paper focuses on large-scale document retrieval with high recall where any missed relevant documents can be critical. A simple keyword matching search is generally not effective nor efficient as 1) it is difficult to find a list of keyword queries that can cover the documents of interest before exploring the dataset, 2) some documents may not contain the exact keywords of interest but may still be highly relevant, and 3) some words have multiple meanings, which would result in irrelevant documents included in the retrieved subset. In this paper, we present TopicSifter, a visual analytics system for interactive search space reduction. Our system utilizes targeted topic modeling based on nonnegative matrix factorization and allows users to give relevance feedback in order to refine their target and guide the topic modeling to the most relevant results. | Interactive topic models allow users to steer the topics to improve the topic modeling results. Various topic steering interactions such as adding, editing, deleting, splitting, and merging topics have been introduced @cite_32 @cite_18 @cite_16 @cite_42 @cite_11 @cite_28 @cite_37 @cite_33 @cite_15 . These interactions can be applied to refine relevant topics and remove irrelevant topics to identify targeted topics when most of the data items are relevant and only a small portion is irrelevant. However, in our large-scale search space reduction setting, a more tailored approach is needed. In this paper, we propose interactive targeted topic modeling to steer the topics to discover the target-relevant topics and documents. | {
"cite_N": [
"@cite_18",
"@cite_37",
"@cite_33",
"@cite_28",
"@cite_42",
"@cite_32",
"@cite_15",
"@cite_16",
"@cite_11"
],
"mid": [
"2171319841",
"2106035193",
"2169681319",
"1995866178",
"2087382273",
"2109154616",
"2110642088",
"2150731624",
"1981467090"
],
"abstract": [
"Topic models have the potential to improve search and browsing by extracting useful semantic themes from web pages and other text documents. When learned topics are coherent and interpretable, they can be valuable for faceted browsing, results set diversity analysis, and document retrieval. However, when dealing with small collections or noisy text (e.g. web search result snippets or blog posts), learned topics can be less coherent, less interpretable, and less useful. To overcome this, we propose two methods to regularize the learning of topic models. Our regularizers work by creating a structured prior over words that reflect broad patterns in the external data. Using thirteen datasets we show that both regularizers improve topic coherence and interpretability while learning a faithful representation of the collection of interest. Overall, this work makes topic models more useful across a broader range of text data.",
"Topic modeling has been commonly used to discover topics from document collections. However, unsupervised models can generate many incoherent topics. To address this problem, several knowledge-based topic models have been proposed to incorporate prior domain knowledge from the user. This work advances this research much further and shows that without any user input, we can mine the prior knowledge automatically and dynamically from topics already found from a large number of domains. This paper first proposes a novel method to mine such prior knowledge dynamically in the modeling process, and then a new topic model to use the knowledge to guide the model inference. What is also interesting is that this approach offers a novel lifelong learning algorithm for topic discovery, which exploits the big (past) data and knowledge gained from such data for subsequent modeling. Our experimental results using product reviews from 50 domains demonstrate the effectiveness of the proposed approach.",
"Topic modeling provides a powerful way to analyze the content of a collection of documents. It has become a popular tool in many research areas, such as text mining, information retrieval, natural language processing, and other related fields. In real-world applications, however, the usefulness of topic modeling is limited due to scalability issues. Scaling to larger document collections via parallelization is an active area of research, but most solutions require drastic steps, such as vastly reducing input vocabulary. In this article we introduce Regularized Latent Semantic Indexing (RLSI)---including a batch version and an online version, referred to as batch RLSI and online RLSI, respectively---to scale up topic modeling. Batch RLSI and online RLSI are as effective as existing topic modeling techniques and can scale to larger datasets without reducing input vocabulary. Moreover, online RLSI can be applied to stream data and can capture the dynamic evolution of topics. Both versions of RLSI formalize topic modeling as a problem of minimizing a quadratic loss function regularized by e1 and or e2 norm. This formulation allows the learning process to be decomposed into multiple suboptimization problems which can be optimized in parallel, for example, via MapReduce. We particularly propose adopting e1 norm on topics and e2 norm on document representations to create a model with compact and readable topics and which is useful for retrieval. In learning, batch RLSI processes all the documents in the collection as a whole, while online RLSI processes the documents in the collection one by one. We also prove the convergence of the learning of online RLSI. Relevance ranking experiments on three TREC datasets show that batch RLSI and online RLSI perform better than LSI, PLSI, LDA, and NMF, and the improvements are sometimes statistically significant. Experiments on a Web dataset containing about 1.6 million documents and 7 million terms, demonstrate a similar boost in performance.",
"Topic modeling has been widely used to mine topics from documents. However, a key weakness of topic modeling is that it needs a large amount of data (e.g., thousands of documents) to provide reliable statistics to generate coherent topics. However, in practice, many document collections do not have so many documents. Given a small number of documents, the classic topic model LDA generates very poor topics. Even with a large volume of data, unsupervised learning of topic models can still produce unsatisfactory results. In recently years, knowledge-based topic models have been proposed, which ask human users to provide some prior domain knowledge to guide the model to produce better topics. Our research takes a radically different approach. We propose to learn as humans do, i.e., retaining the results learned in the past and using them to help future learning. When faced with a new task, we first mine some reliable (prior) knowledge from the past learning modeling results and then use it to guide the model inference to generate more coherent topics. This approach is possible because of the big data readily available on the Web. The proposed algorithm mines two forms of knowledge: must-link (meaning that two words should be in the same topic) and cannot-link (meaning that two words should not be in the same topic). It also deals with two problems of the automatically mined knowledge, i.e., wrong knowledge and knowledge transitivity. Experimental results using review documents from 100 product domains show that the proposed approach makes dramatic improvements over state-of-the-art baselines.",
"Topic modeling has been widely used for analyzing text document collections. Recently, there have been significant advancements in various topic modeling techniques, particularly in the form of probabilistic graphical modeling. State-of-the-art techniques such as Latent Dirichlet Allocation (LDA) have been successfully applied in visual text analytics. However, most of the widely-used methods based on probabilistic modeling have drawbacks in terms of consistency from multiple runs and empirical convergence. Furthermore, due to the complicatedness in the formulation and the algorithm, LDA cannot easily incorporate various types of user feedback. To tackle this problem, we propose a reliable and flexible visual analytics system for topic modeling called UTOPIAN (User-driven Topic modeling based on Interactive Nonnegative Matrix Factorization). Centered around its semi-supervised formulation, UTOPIAN enables users to interact with the topic modeling method and steer the result in a user-driven manner. We demonstrate the capability of UTOPIAN via several usage scenarios with real-world document corpuses such as InfoVis VAST paper data set and product review data sets.",
"We propose a new unsupervised learning technique for extracting information from large text collections. We model documents as if they were generated by a two-stage stochastic process. Each author is represented by a probability distribution over topics, and each topic is represented as a probability distribution over words for that topic. The words in a multi-author paper are assumed to be the result of a mixture of each authors' topic mixture. The topic-word and author-topic distributions are learned from data in an unsupervised manner using a Markov chain Monte Carlo algorithm. We apply the methodology to a large corpus of 160,000 abstracts and 85,000 authors from the well-known CiteSeer digital library, and learn a model with 300 topics. We discuss in detail the interpretation of the results discovered by the system including specific topic and author models, ranking of authors by topic and topics by author, significant trends in the computer science literature between 1990 and 2002, parsing of abstracts by topics and authors and detection of unusual papers by specific authors. An online query interface to the model is also discussed that allows interactive exploration of author-topic models for corpora such as CiteSeer.",
"This paper presents online topic model (OLDA), a topic model that automatically captures the thematic patterns and identifies emerging topics of text streams and their changes over time. Our approach allows the topic modeling framework, specifically the latent Dirichlet allocation (LDA) model, to work in an online fashion such that it incrementally builds an up-to-date model (mixture of topics per document and mixture of words per topic) when a new document (or a set of documents) appears. A solution based on the empirical Bayes method is proposed. The idea is to incrementally update the current model according to the information inferred from the new stream of data with no need to access previous data. The dynamics of the proposed approach also provide an efficient mean to track the topics over time and detect the emerging topics in real time. Our method is evaluated both qualitatively and quantitatively using benchmark datasets. In our experiments, the OLDA has discovered interesting patterns by just analyzing a fraction of data at a time. Our tests also prove the ability of OLDA to align the topics across the epochs with which the evolution of the topics over time is captured. The OLDA is also comparable to, and sometimes better than, the original LDA in predicting the likelihood of unseen documents.",
"Topic models provide a powerful tool for analyzing large text collections by representing high dimensional data in a low dimensional subspace. Fitting a topic model given a set of training documents requires approximate inference techniques that are computationally expensive. With today's large-scale, constantly expanding document collections, it is useful to be able to infer topic distributions for new documents without retraining the model. In this paper, we empirically evaluate the performance of several methods for topic inference in previously unseen documents, including methods based on Gibbs sampling, variational inference, and a new method inspired by text classification. The classification-based inference method produces results similar to iterative inference methods, but requires only a single matrix multiplication. In addition to these inference methods, we present SparseLDA, an algorithm and data structure for evaluating Gibbs sampling distributions. Empirical results indicate that SparseLDA can be approximately 20 times faster than traditional LDA and provide twice the speedup of previously published fast sampling methods, while also using substantially less memory.",
"Many data sets contain rich information about objects, as well as pairwise relations between them. For instance, in networks of websites, scientific papers, and other documents, each node has content consisting of a collection of words, as well as hyperlinks or citations to other nodes. In order to perform inference on such data sets, and make predictions and recommendations, it is useful to have models that are able to capture the processes which generate the text at each node and the links between them. In this paper, we combine classic ideas in topic modeling with a variant of the mixed-membership block model recently developed in the statistical physics community. The resulting model has the advantage that its parameters, including the mixture of topics of each document and the resulting overlapping communities, can be inferred with a simple and scalable expectation-maximization algorithm. We test our model on three data sets, performing unsupervised topic classification and link prediction. For both tasks, our model outperforms several existing state-of-the-art methods, achieving higher accuracy with significantly less computation, analyzing a data set with 1.3 million words and 44 thousand links in a few minutes."
]
} |
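Several of the systems cited in this row steer topics produced by nonnegative matrix factorization, so a minimal NMF topic-extraction loop may help orient the reader. The snippet is a generic sketch (it assumes scikit-learn 1.0 or later for get_feature_names_out), not the cited systems' code; the toy corpus, the number of topics, and the "rejected" topic are made up.

```python
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "topic models summarize large document collections",
    "nonnegative matrix factorization yields interpretable topics",
    "stock prices and market volatility prediction",
    "interest rates and bond market analysis",
    "steering topic models with user feedback",
    "portfolio risk in volatile markets",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)            # document-term matrix

k = 2                                         # assumed number of topics
model = NMF(n_components=k, init="nndsvd", random_state=0)
W = model.fit_transform(X)                    # document-topic weights
H = model.components_                         # topic-term weights

terms = vectorizer.get_feature_names_out()
for t in range(k):
    top = H[t].argsort()[::-1][:5]
    print(f"topic {t}:", ", ".join(terms[i] for i in top))

# Crude "steering": drop documents whose dominant topic was rejected
# by the user, after which the model could be refit on the reduced corpus.
rejected = {1}                                # assumed user choice
kept = [d for d, w in zip(docs, W) if w.argmax() not in rejected]
print(f"{len(kept)} of {len(docs)} documents kept")
```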
1907.12047 | 2956646654 | Abstract The prevalence of e-learning systems and on-line courses has made educational material widely accessible to students of varying abilities and backgrounds. There is thus a growing need to accommodate for individual differences in e-learning systems. This paper presents an algorithm called EduRank for personalizing educational content to students that combines a collaborative filtering algorithm with voting methods. EduRank constructs a difficulty ranking for each student by aggregating the rankings of similar students using different aspects of their performance on common questions. These aspects include grades, number of retries, and time spent solving questions. It infers a difficulty ranking directly over the questions for each student, rather than ordering them according to the student’s predicted score. The EduRank algorithm was tested on two data sets containing thousands of students and a million records. It was able to outperform the state-of-the-art ranking approaches as well as a domain expert. EduRank was used by students in a classroom activity, where a prior model was incorporated to predict the difficulty rankings of students with no prior history in the system. It was shown to lead students to solve more difficult questions than an ordering by a domain expert, without reducing their performance. | Our work relates to several areas of research in student modeling. Several approaches within the educational data mining community have used computational methods for sequencing students' learning items. Pardos and Heffernan @cite_39 infer order over questions by predicting students' skill levels over action pairs using Bayesian knowledge tracing. They show the efficacy of this approach on a test-set comprising random sequences of three questions as well as simulated data. This approach explicitly considers each possible order sequence and does not scale to handling a large number of sequences, as in the student ranking problem we consider in this paper. | {
"cite_N": [
"@cite_39"
],
"mid": [
"2129373702"
],
"abstract": [
"Researchers who make tutoring systems would like to know which sequences of educational content lead to the most effective learning by their students. The majority of data collected in many ITS systems consist of answers to a group of questions of a given skill often presented in a random sequence. Following work that identifies which items produce the most learning we propose a Bayesian method using similar permutation analysis techniques to determine if item learning is context sensitive and if so which orderings of questions produce the most learning. We confine our analysis to random sequences with three questions. The method identifies question ordering rules such as, question A should go before B, which are statistically reliably beneficial to learning. Real tutor data from five random sequence problem sets were analyzed. Statistically reliable orderings of questions were found in two of the five real data problem sets. A simulation consisting of 140 experiments was run to validate the method's accuracy and test its reliability. The method succeeded in finding 43 of the underlying item order effects with a 6 false positive rate using a p value threshold of <= 0.05. Using this method, ITS researchers can gain valuable knowledge about their problem sets and feasibly let the ITS automatically identify item order effects and optimize student learning by restricting assigned sequences to those prescribed as most beneficial to learning."
]
} |
1907.12047 | 2956646654 | Abstract The prevalence of e-learning systems and on-line courses has made educational material widely accessible to students of varying abilities and backgrounds. There is thus a growing need to accommodate for individual differences in e-learning systems. This paper presents an algorithm called EduRank for personalizing educational content to students that combines a collaborative filtering algorithm with voting methods. EduRank constructs a difficulty ranking for each student by aggregating the rankings of similar students using different aspects of their performance on common questions. These aspects include grades, number of retries, and time spent solving questions. It infers a difficulty ranking directly over the questions for each student, rather than ordering them according to the student’s predicted score. The EduRank algorithm was tested on two data sets containing thousands of students and a million records. It was able to outperform the state-of-the-art ranking approaches as well as a domain expert. EduRank was used by students in a classroom activity, where a prior model was incorporated to predict the difficulty rankings of students with no prior history in the system. It was shown to lead students to solve more difficult questions than an ordering by a domain expert, without reducing their performance. | Multiple researchers have used Bayesian knowledge tracing as a way to infer students' skill acquisition (i.e., mastery level) over time given their performance levels on different question sequences @cite_25 . These researchers reason about students' prior knowledge of skills and also account for slips and guessing on test problems. The models are trained on large data sets from multiple students using machine learning algorithms that account for latent variables @cite_18 @cite_17 . We solve a different problem --- using other students' performance to personalize ranking over test-questions. In addition, these methods measure students' performance dichotomously (i.e., success or failure) whereas we reason about additional features such as students' grade and number of attempts to solve the question. We intend to infer students' skill levels to improve the ranking prediction in future work. | {
"cite_N": [
"@cite_18",
"@cite_25",
"@cite_17"
],
"mid": [
"1737233464",
"1597703949",
"160448112"
],
"abstract": [
"Bayesian knowledge tracing has been used widely to model student learning. However, the name “Bayesian knowledge tracing” has been applied to two related, but distinct, models: The first is the Bayesian knowledge tracing Markov chain which predicts the student-averaged probability of a correct application of a skill. We present an analytical solution to this model and show that it is a function of three parameters and has the functional form of an exponential. The second form is the Bayesian knowledge tracing hidden Markov model which can use the individual student's performance at each opportunity to apply a skill to update the conditional probability that the student has learned that skill. We use a fixed point analysis to study solutions of this model and find a range of parameters where it has the desired behavior. *Revised version, Feb 2017",
"Modeling students' knowledge is a fundamental part of intelligent tutoring systems. One of the most popular methods for estimating students' knowledge is Corbett and Anderson's [6] Bayesian Knowledge Tracing model. The model uses four parameters per skill, fit using student performance data, to relate performance to learning. Beck [1] showed that existing methods for determining these parameters are prone to the Identifiability Problem:the same performance data can be fit equally well by different parameters, with different implications on system behavior. Beck offered a solution based on Dirichlet Priors [1], but, we show this solution is vulnerable to a different problem, Model Degeneracy, where parameter values violate the model's conceptual meaning (such as a student being more likely to get a correct answer if he she does not know a skill than if he she does).We offer a new method for instantiating Bayesian Knowledge Tracing, using machine learning to make contextual estimations of the probability that a student has guessed or slipped. This method is no more prone to problems with Identifiability than Beck's solution, has less Model Degeneracy than competing approaches, and fits student performance data better than prior methods. Thus, it allows for more accurate and reliable student modeling in ITSs that use knowledge tracing.",
"Student modeling is an important component of ITS research because it can help guide the behavior of a running tutor and help researchers understand how students learn. Due to its predictive accuracy, interpretability and ability to infer student knowledge, Corbett & Anderson’s Bayesian Knowledge Tracing is one of the most popular student models. However, researchers have discovered problems with some of the most popular methods of fitting it. These problems include: multiple sets of highly dissimilar parameters predicting the data equally well (identifiability), local minima, degenerate parameters, and computational cost during fitting. Some researchers have proposed new fitting procedures to combat these problems, but are more complex and not completely successful at eliminating the problems they set out to prevent. We instead fit parameters by estimating the mostly likely point that each student learned the skill, developing a new method that avoids the above problems while achieving similar predictive accuracy."
]
} |
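The related-work passage in the row above describes Bayesian knowledge tracing as a latent-variable model over dichotomous responses with slip and guess parameters. As a rough illustration of that family of models (not the formulation of any specific cited paper), the sketch below implements the standard BKT posterior update; the parameter names and the values in the toy usage are arbitrary placeholders.

```python
def bkt_update(p_known, correct, p_slip, p_guess, p_learn):
    """One Bayesian knowledge tracing step: update P(skill known) after observing
    a single correct/incorrect response, then account for learning at this opportunity."""
    if correct:
        # P(known | correct response) via Bayes' rule
        num = p_known * (1.0 - p_slip)
        den = num + (1.0 - p_known) * p_guess
    else:
        # P(known | incorrect response)
        num = p_known * p_slip
        den = num + (1.0 - p_known) * (1.0 - p_guess)
    posterior = num / den
    # Transition: the student may acquire the skill at this opportunity.
    return posterior + (1.0 - posterior) * p_learn


# Toy usage: trace one student's mastery estimate over a short response sequence.
p_known = 0.2  # assumed prior mastery, illustrative only
for obs in [False, True, True, True]:
    p_known = bkt_update(p_known, obs, p_slip=0.1, p_guess=0.2, p_learn=0.15)
    print(round(p_known, 3))
```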
1907.12047 | 2956646654 | Abstract The prevalence of e-learning systems and on-line courses has made educational material widely accessible to students of varying abilities and backgrounds. There is thus a growing need to accommodate for individual differences in e-learning systems. This paper presents an algorithm called EduRank for personalizing educational content to students that combines a collaborative filtering algorithm with voting methods. EduRank constructs a difficulty ranking for each student by aggregating the rankings of similar students using different aspects of their performance on common questions. These aspects include grades, number of retries, and time spent solving questions. It infers a difficulty ranking directly over the questions for each student, rather than ordering them according to the student’s predicted score. The EduRank algorithm was tested on two data sets containing thousands of students and a million records. It was able to outperform the state-of-the-art ranking approaches as well as a domain expert. EduRank was used by students in a classroom activity, where a prior model was incorporated to predict the difficulty rankings of students with no prior history in the system. It was shown to lead students to solve more difficult questions than an ordering by a domain expert, without reducing their performance. | Approaches based on recommendation systems are increasingly being used in e-learning to predict students' scores and to personalize educational content. We mention a few examples below and refer the reader to the surveys by @cite_30 and @cite_37 for more details. Collaborative filtering (CF) was previously used in the educational domain for predicting students' performance. Toscher and Jahrer @cite_26 use an ensemble of CF algorithms to predict performance for items in the KDD 2010 educational challenge. Berger et al. @cite_3 use a model-based approach for predicting accuracy levels of students' performance and skill levels on real and simulated data sets. They also formalize a relationship between CF and Item Response Theory methods and demonstrate this relationship empirically. @cite_28 use matrix factorization for task sequencing in a large commercial Intelligent Tutoring System, showing improved adaptivity compared to a baseline sequencer. Finally, Loll and Pinkwart @cite_1 use CF as a diagnostic tool for knowledge test questions as well as more exploratory ill-defined tasks. None of these approaches ranked questions according to their personal difficulty level for specific students. | {
"cite_N": [
"@cite_30",
"@cite_37",
"@cite_26",
"@cite_28",
"@cite_1",
"@cite_3"
],
"mid": [
"1569739730",
"1994576156",
"2116413942",
"2038585576",
"2950316093",
"2157881433"
],
"abstract": [
"We apply collaborative filtering (CF) to dichotomously scored student response data (right, wrong, or no interaction), finding optimal parameters for each student and item based on cross-validated prediction accuracy. The approach is naturally suited to comparing different models, both unidimensional and multidimensional in ability, including a widely used subset of Item Response Theory (IRT) models which obtain as specific instances of the CF: the one-parameter logistic (Rasch) model, Birnbaum’s 2PL model, and Reckase’s multidimensional generalization M2PL. We find that IRT models perform well relative to generalized alternatives, and thus this method offers a fast and stable alternate approach to IRT parameter estimation. Using both real and simulated data we examine cases where oneor two-dimensional IRT models prevail and are not improved by increasing the number of features. Model selection is based on prediction accuracy of the CF, though it is shown to be consistent with factor analysis. In multidimensional cases the item parameterizations can be used in conjunction with cluster analysis to identify groups of items which measure different ability dimensions.",
"A major challenge for collaborative filtering (CF) techniques in recommender systems is the data sparsity that is caused by missing and noisy ratings. This problem is even more serious for CF domains where the ratings are expressed numerically, e.g. as 5-star grades. We assume the 5-star ratings are unordered bins instead of ordinal relative preferences. We observe that, while we may lack the information in numerical ratings, we sometimes have additional auxiliary data in the form of binary ratings. This is especially true given that users can easily express themselves with their preferences expressed as likes or dislikes for items. In this paper, we explore how to use these binary auxiliary preference data to help reduce the impact of data sparsity for CF domains expressed in numerical ratings. We solve this problem by transferring the rating knowledge from some auxiliary data source in binary form (that is, likes or dislikes), to a target numerical rating matrix. In particular, our solution is to model both the numerical ratings and ratings expressed as like or dislike in a principled way. We present a novel framework of Transfer by Collective Factorization (TCF), in which we construct a shared latent space collectively and learn the data-dependent effect separately. A major advantage of the TCF approach over the previous bilinear method of collective matrix factorization is that we are able to capture the data-dependent effect when sharing the data-independent knowledge. This allows us to increase the overall quality of knowledge transfer. We present extensive experimental results to demonstrate the effectiveness of TCF at various sparsity levels, and show improvements of our approach as compared to several state-of-the-art methods.",
"We present a general approach for collaborative filtering (CF) using spectral regularization to learn linear operators mapping a set of \"users\" to a set of possibly desired \"objects\". In particular, several recent low-rank type matrix-completion methods for CF are shown to be special cases of our proposed framework. Unlike existing regularization-based CF, our approach can be used to incorporate additional information such as attributes of the users objects---a feature currently lacking in existing regularization-based CF approaches---using popular and well-known kernel methods. We provide novel representer theorems that we use to develop new estimation methods. We then provide learning algorithms based on low-rank decompositions and test them on a standard CF data set. The experiments indicate the advantages of generalizing the existing regularization-based CF methods to incorporate related information about users and objects. Finally, we show that certain multi-task learning methods can be also seen as special cases of our proposed approach.",
"Collaborative filtering (CF) has been widely employed within recommender systems to solve many real-world problems. Learning effective latent factors plays the most important role in collaborative filtering. Traditional CF methods based upon matrix factorization techniques learn the latent factors from the user-item ratings and suffer from the cold start problem as well as the sparsity problem. Some improved CF methods enrich the priors on the latent factors by incorporating side information as regularization. However, the learned latent factors may not be very effective due to the sparse nature of the ratings and the side information. To tackle this problem, we learn effective latent representations via deep learning. Deep learning models have emerged as very appealing in learning effective representations in many applications. In particular, we propose a general deep architecture for CF by integrating matrix factorization with deep feature learning. We provide a natural instantiations of our architecture by combining probabilistic matrix factorization with marginalized denoising stacked auto-encoders. The combined framework leads to a parsimonious fit over the latent features as indicated by its improved performance in comparison to prior state-of-art models over four large datasets for the tasks of movie book recommendation and response prediction.",
"Collaborative filtering (CF) is a successful approach commonly used by many recommender systems. Conventional CF-based methods use the ratings given to items by users as the sole source of information for learning to make recommendation. However, the ratings are often very sparse in many applications, causing CF-based methods to degrade significantly in their recommendation performance. To address this sparsity problem, auxiliary information such as item content information may be utilized. Collaborative topic regression (CTR) is an appealing recent method taking this approach which tightly couples the two components that learn from two different sources of information. Nevertheless, the latent representation learned by CTR may not be very effective when the auxiliary information is very sparse. To address this problem, we generalize recent advances in deep learning from i.i.d. input to non-i.i.d. (CF-based) input and propose in this paper a hierarchical Bayesian model called collaborative deep learning (CDL), which jointly performs deep representation learning for the content information and collaborative filtering for the ratings (feedback) matrix. Extensive experiments on three real-world datasets from different domains show that CDL can significantly advance the state of the art.",
"Collaborative filtering (CF) is a successful approach commonly used by many recommender systems. Conventional CF-based methods use the ratings given to items by users as the sole source of information for learning to make recommendation. However, the ratings are often very sparse in many applications, causing CF-based methods to degrade significantly in their recommendation performance. To address this sparsity problem, auxiliary information such as item content information may be utilized. Collaborative topic regression (CTR) is an appealing recent method taking this approach which tightly couples the two components that learn from two different sources of information. Nevertheless, the latent representation learned by CTR may not be very effective when the auxiliary information is very sparse. To address this problem, we generalize recently advances in deep learning from i.i.d. input to non-i.i.d. (CF-based) input and propose in this paper a hierarchical Bayesian model called collaborative deep learning (CDL), which jointly performs deep representation learning for the content information and collaborative filtering for the ratings (feedback) matrix. Extensive experiments on three real-world datasets from different domains show that CDL can significantly advance the state of the art."
]
} |
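The row above discusses collaborative filtering for predicting student performance and contrasts it with ranking questions by personal difficulty. The sketch below is only a simplified illustration of that idea, aggregating difficulty orderings from similar students with a Borda-style vote; it is not the published EduRank algorithm, and the similarity measure, neighbourhood size, and tie handling are all assumptions.

```python
import numpy as np

def personalized_difficulty_ranking(grades, target, k=3):
    """Toy neighbourhood-based ranking. 'grades' is a (students x questions) array with
    np.nan for unanswered questions. Questions a neighbour scored lower on are treated
    as harder, and neighbours' orderings are merged with a simple Borda-style count."""
    n_students, n_questions = grades.shape
    sims = []
    for s in range(n_students):
        if s == target:
            continue
        common = ~np.isnan(grades[s]) & ~np.isnan(grades[target])
        if common.sum() < 2:
            continue
        # similarity = negative mean absolute grade difference on shared questions
        sims.append((s, -np.mean(np.abs(grades[s, common] - grades[target, common]))))
    neighbours = [s for s, _ in sorted(sims, key=lambda x: x[1], reverse=True)[:k]]

    borda = np.zeros(n_questions)
    for s in neighbours:
        answered = np.where(~np.isnan(grades[s]))[0]
        # lower grade -> assumed harder -> more Borda points
        order = answered[np.argsort(grades[s, answered])]
        for rank, q in enumerate(order):
            borda[q] += len(order) - rank
    return np.argsort(-borda)  # question indices from hardest to easiest for the target
```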
1907.12209 | 2966634313 | Monocular depth prediction plays a crucial role in understanding 3D scene geometry. Although recent methods have achieved impressive progress in evaluation metrics such as the pixel-wise relative error, most methods neglect the geometric constraints in the 3D space. In this work, we show the importance of the high-order 3D geometric constraints for depth prediction. By designing a loss term that enforces one simple type of geometric constraints, namely, virtual normal directions determined by randomly sampled three points in the reconstructed 3D space, we can considerably improve the depth prediction accuracy. Significantly, the byproduct of this predicted depth being sufficiently accurate is that we are now able to recover good 3D structures of the scene such as the point cloud and surface normal directly from the depth, eliminating the necessity of training new sub-models as was previously done. Experiments on two benchmarks: NYU Depth-V2 and KITTI demonstrate the effectiveness of our method and state-of-the-art performance. | Depth prediction from images is a long-standing problem. Previous work can be divided into active methods and passive methods. The former use auxiliary optical information for prediction, such as coded patterns @cite_30 , while the latter rely solely on image content. Monocular depth prediction @cite_5 @cite_50 @cite_22 @cite_13 @cite_14 has been extensively studied recently. As limited geometric information can be directly extracted from a monocular image, the task is essentially an ill-posed problem. Recently, owing to the structural features learned by very deep convolutional neural networks such as ResNet @cite_12 , various DCNN-based methods learn to predict depth with deep CNN features. Fu et al. @cite_16 proposed an encoder-decoder network, which extracts multi-scale features from the encoder and is trained in an end-to-end manner without iterative refinement. They achieved state-of-the-art performance on several datasets. Jiao et al. @cite_17 proposed an attention-driven loss, which merges semantic priors to improve prediction precision on datasets with unbalanced depth distributions. | {
"cite_N": [
"@cite_30",
"@cite_14",
"@cite_22",
"@cite_50",
"@cite_5",
"@cite_16",
"@cite_13",
"@cite_12",
"@cite_17"
],
"mid": [
"1803059841",
"2950619061",
"2125416623",
"2124907686",
"2300779272",
"2949634581",
"2798410215",
"2952280228",
"2399238887"
],
"abstract": [
"In this article, we tackle the problem of depth estimation from single monocular images. Compared with depth estimation using multiple images such as stereo depth perception, depth from monocular images is much more challenging. Prior work typically focuses on exploiting geometric priors or additional sources of information, most using hand-crafted features. Recently, there is mounting evidence that features from deep convolutional neural networks (CNN) set new records for various vision applications. On the other hand, considering the continuous characteristic of the depth values, depth estimation can be naturally formulated as a continuous conditional random field (CRF) learning problem. Therefore, here we present a deep convolutional neural field model for estimating depths from single monocular images, aiming to jointly explore the capacity of deep CNN and continuous CRF. In particular, we propose a deep structured learning scheme which learns the unary and pairwise potentials of continuous CRF in a unified deep CNN framework. We then further propose an equally effective model based on fully convolutional networks and a novel superpixel pooling method, which is about 10 times faster, to speedup the patch-wise convolutions in the deep model. With this more efficient model, we are able to design deeper networks to pursue better performance. Our proposed method can be used for depth estimation of general scenes with no geometric priors nor any extra information injected. In our case, the integral of the partition function can be calculated in a closed form such that we can exactly solve the log-likelihood maximization. Moreover, solving the inference problem for predicting depths of a test image is highly efficient as closed-form solutions exist. Experiments on both indoor and outdoor scene datasets demonstrate that the proposed method outperforms state-of-the-art depth estimation approaches.",
"We consider the problem of depth estimation from a single monocular image in this work. It is a challenging task as no reliable depth cues are available, e.g., stereo correspondences, motions, etc. Previous efforts have been focusing on exploiting geometric priors or additional sources of information, with all using hand-crafted features. Recently, there is mounting evidence that features from deep convolutional neural networks (CNN) are setting new records for various vision applications. On the other hand, considering the continuous characteristic of the depth values, depth estimations can be naturally formulated into a continuous conditional random field (CRF) learning problem. Therefore, we in this paper present a deep convolutional neural field model for estimating depths from a single image, aiming to jointly explore the capacity of deep CNN and continuous CRF. Specifically, we propose a deep structured learning scheme which learns the unary and pairwise potentials of continuous CRF in a unified deep CNN framework. The proposed method can be used for depth estimations of general scenes with no geometric priors nor any extra information injected. In our case, the integral of the partition function can be analytically calculated, thus we can exactly solve the log-likelihood optimization. Moreover, solving the MAP problem for predicting depths of a new image is highly efficient as closed-form solutions exist. We experimentally demonstrate that the proposed method outperforms state-of-the-art depth estimation methods on both indoor and outdoor scene datasets.",
"We consider the problem of depth estimation from a single monocular image in this work. It is a challenging task as no reliable depth cues are available, e.g., stereo correspondences, motions etc. Previous efforts have been focusing on exploiting geometric priors or additional sources of information, with all using hand-crafted features. Recently, there is mounting evidence that features from deep convolutional neural networks (CNN) are setting new records for various vision applications. On the other hand, considering the continuous characteristic of the depth values, depth estimations can be naturally formulated into a continuous conditional random field (CRF) learning problem. Therefore, we in this paper present a deep convolutional neural field model for estimating depths from a single image, aiming to jointly explore the capacity of deep CNN and continuous CRF. Specifically, we propose a deep structured learning scheme which learns the unary and pairwise potentials of continuous CRF in a unified deep CNN framework. The proposed method can be used for depth estimations of general scenes with no geometric priors nor any extra information injected. In our case, the integral of the partition function can be analytically calculated, thus we can exactly solve the log-likelihood optimization. Moreover, solving the MAP problem for predicting depths of a new image is highly efficient as closed-form solutions exist. We experimentally demonstrate that the proposed method outperforms state-of-the-art depth estimation methods on both indoor and outdoor scene datasets.",
"Predicting the depth (or surface normal) of a scene from single monocular color images is a challenging task. This paper tackles this challenging and essentially underdetermined problem by regression on deep convolutional neural network (DCNN) features, combined with a post-processing refining step using conditional random fields (CRF). Our framework works at two levels, super-pixel level and pixel level. First, we design a DCNN model to learn the mapping from multi-scale image patches to depth or surface normal values at the super-pixel level. Second, the estimated super-pixel depth or surface normal is refined to the pixel level by exploiting various potentials on the depth or surface normal map, which includes a data term, a smoothness term among super-pixels and an auto-regression term characterizing the local structure of the estimation map. The inference problem can be efficiently solved because it admits a closed-form solution. Experiments on the Make3D and NYU Depth V2 datasets show competitive results compared with recent state-of-the-art methods.",
"A significant weakness of most current deep Convolutional Neural Networks is the need to train them using vast amounts of manually labelled data. In this work we propose a unsupervised framework to learn a deep convolutional neural network for single view depth prediction, without requiring a pre-training stage or annotated ground-truth depths. We achieve this by training the network in a manner analogous to an autoencoder. At training time we consider a pair of images, source and target, with small, known camera motion between the two such as a stereo pair. We train the convolutional encoder for the task of predicting the depth map for the source image. To do so, we explicitly generate an inverse warp of the target image using the predicted depth and known inter-view displacement, to reconstruct the source image; the photometric error in the reconstruction is the reconstruction loss for the encoder. The acquisition of this training data is considerably simpler than for equivalent systems, requiring no manual annotation, nor calibration of depth sensor to camera. We show that our network trained on less than half of the KITTI dataset gives comparable performance to that of the state-of-the-art supervised methods for single view depth estimation.",
"A significant weakness of most current deep Convolutional Neural Networks is the need to train them using vast amounts of manu- ally labelled data. In this work we propose a unsupervised framework to learn a deep convolutional neural network for single view depth predic- tion, without requiring a pre-training stage or annotated ground truth depths. We achieve this by training the network in a manner analogous to an autoencoder. At training time we consider a pair of images, source and target, with small, known camera motion between the two such as a stereo pair. We train the convolutional encoder for the task of predicting the depth map for the source image. To do so, we explicitly generate an inverse warp of the target image using the predicted depth and known inter-view displacement, to reconstruct the source image; the photomet- ric error in the reconstruction is the reconstruction loss for the encoder. The acquisition of this training data is considerably simpler than for equivalent systems, requiring no manual annotation, nor calibration of depth sensor to camera. We show that our network trained on less than half of the KITTI dataset (without any further augmentation) gives com- parable performance to that of the state of art supervised methods for single view depth estimation.",
"Monocular depth estimation, which plays a crucial role in understanding 3D scene geometry, is an ill-posed problem. Recent methods have gained significant improvement by exploring image-level information and hierarchical features from deep convolutional neural networks (DCNNs). These methods model depth estimation as a regression problem and train the regression networks by minimizing mean squared error, which suffers from slow convergence and unsatisfactory local solutions. Besides, existing depth estimation networks employ repeated spatial pooling operations, resulting in undesirable low-resolution feature maps. To obtain high-resolution depth maps, skip-connections or multi-layer deconvolution networks are required, which complicates network training and consumes much more computations. To eliminate or at least largely reduce these problems, we introduce a spacing-increasing discretization (SID) strategy to discretize depth and recast depth network learning as an ordinal regression problem. By training the network using an ordinary regression loss, our method achieves much higher accuracy and faster convergence in synch . Furthermore, we adopt a multi-scale network structure which avoids unnecessary spatial pooling and captures multi-scale information in parallel. The method described in this paper achieves state-of-the-art results on four challenging benchmarks, i.e., KITTI [17], ScanNet [9], Make3D [50], and NYU Depth v2 [42], and win the 1st prize in Robust Vision Challenge 2018. Code has been made available at: this https URL",
"Given the recent advances in depth prediction from Convolutional Neural Networks (CNNs), this paper investigates how predicted depth maps from a deep neural network can be deployed for accurate and dense monocular reconstruction. We propose a method where CNN-predicted dense depth maps are naturally fused together with depth measurements obtained from direct monocular SLAM. Our fusion scheme privileges depth prediction in image locations where monocular SLAM approaches tend to fail, e.g. along low-textured regions, and vice-versa. We demonstrate the use of depth prediction for estimating the absolute scale of the reconstruction, hence overcoming one of the major limitations of monocular SLAM. Finally, we propose a framework to efficiently fuse semantic labels, obtained from a single frame, with dense SLAM, yielding semantically coherent scene reconstruction from a single view. Evaluation results on two benchmark datasets show the robustness and accuracy of our approach.",
"A single color image can contain many cues informative towards different aspects of local geometric structure. We approach the problem of monocular depth estimation by using a neural network to produce a mid-level representation that summarizes these cues. This network is trained to characterize local scene geometry by predicting, at every image location, depth derivatives of different orders, orientations and scales. However, instead of a single estimate for each derivative, the network outputs probability distributions that allow it to express confidence about some coefficients, and ambiguity about others. Scene depth is then estimated by harmonizing this overcomplete set of network predictions, using a globalization procedure that finds a single consistent depth map that best matches all the local derivative distributions. We demonstrate the efficacy of this approach through evaluation on the NYU v2 depth data set."
]
} |
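The abstract in the rows above describes a loss built from virtual normals: the normals of planes spanned by randomly sampled point triplets in the reconstructed 3D scene. The snippet below sketches how such a constraint could be computed from predicted and ground-truth depth; the triplet sampling, degeneracy filtering, and camera-model handling are simplifications for illustration, not the paper's exact implementation.

```python
import torch

def backproject(depth, K_inv, uv1):
    """Lift pixels to 3D: depth (N,), uv1 (N, 3) homogeneous pixel coords, K_inv (3, 3)."""
    return depth.unsqueeze(1) * (uv1 @ K_inv.T)

def virtual_normal_loss(pred_depth, gt_depth, K_inv, uv1, n_triplets=1000, eps=1e-6):
    """Illustrative virtual-normal style loss: sample random point triplets, form the unit
    normal of the plane through each triplet in both the predicted and the ground-truth
    reconstruction, and penalise their L1 difference."""
    n = pred_depth.shape[0]
    idx = torch.randint(0, n, (n_triplets, 3))
    P_pred = backproject(pred_depth, K_inv, uv1)
    P_gt = backproject(gt_depth, K_inv, uv1)

    def normals(P):
        a, b, c = P[idx[:, 0]], P[idx[:, 1]], P[idx[:, 2]]
        nvec = torch.cross(b - a, c - a, dim=1)
        return nvec / (nvec.norm(dim=1, keepdim=True) + eps)

    n_pred, n_gt = normals(P_pred), normals(P_gt)
    valid = n_gt.norm(dim=1) > 0.5  # crude filter for (nearly) collinear triplets in GT
    return (n_pred[valid] - n_gt[valid]).abs().mean()
```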
1907.12209 | 2966634313 | Monocular depth prediction plays a crucial role in understanding 3D scene geometry. Although recent methods have achieved impressive progress in evaluation metrics such as the pixel-wise relative error, most methods neglect the geometric constraints in the 3D space. In this work, we show the importance of the high-order 3D geometric constraints for depth prediction. By designing a loss term that enforces one simple type of geometric constraints, namely, virtual normal directions determined by randomly sampled three points in the reconstructed 3D space, we can considerably improve the depth prediction accuracy. Significantly, the byproduct of this predicted depth being sufficiently accurate is that we are now able to recover good 3D structures of the scene such as the point cloud and surface normal directly from the depth, eliminating the necessity of training new sub-models as was previously done. Experiments on two benchmarks: NYU Depth-V2 and KITTI demonstrate the effectiveness of our method and state-of-the-art performance. | Most previous methods adopted only pixel-wise depth supervision to train a network. By contrast, Liu et al. @cite_19 combined DCNN with the continuous conditional random field (CRF) to exploit consistency information of neighbouring pixels. The CRF establishes a pair-wise constraint for local regions. Furthermore, several high-order constraints have been investigated. Chen et al. @cite_33 applied generative adversarial training to let the network learn a context-aware, patch-level loss automatically. Note that most of these methods work directly on the depth map rather than in the 3D space. | {
"cite_N": [
"@cite_19",
"@cite_33"
],
"mid": [
"2964288706",
"1923697677"
],
"abstract": [
"Abstract: Deep Convolutional Neural Networks (DCNNs) have recently shown state of the art performance in high level vision tasks, such as image classification and object detection. This work brings together methods from DCNNs and probabilistic graphical models for addressing the task of pixel-level classification (also called \"semantic image segmentation\"). We show that responses at the final layer of DCNNs are not sufficiently localized for accurate object segmentation. This is due to the very invariance properties that make DCNNs good for high level tasks. We overcome this poor localization property of deep networks by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF). Qualitatively, our \"DeepLab\" system is able to localize segment boundaries at a level of accuracy which is beyond previous methods. Quantitatively, our method sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 71.6 IOU accuracy in the test set. We show how these results can be obtained efficiently: Careful network re-purposing and a novel application of the 'hole' algorithm from the wavelet community allow dense computation of neural net responses at 8 frames per second on a modern GPU.",
"Deep Convolutional Neural Networks (DCNNs) have recently shown state of the art performance in high level vision tasks, such as image classification and object detection. This work brings together methods from DCNNs and probabilistic graphical models for addressing the task of pixel-level classification (also called \"semantic image segmentation\"). We show that responses at the final layer of DCNNs are not sufficiently localized for accurate object segmentation. This is due to the very invariance properties that make DCNNs good for high level tasks. We overcome this poor localization property of deep networks by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF). Qualitatively, our \"DeepLab\" system is able to localize segment boundaries at a level of accuracy which is beyond previous methods. Quantitatively, our method sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 71.6 IOU accuracy in the test set. We show how these results can be obtained efficiently: Careful network re-purposing and a novel application of the 'hole' algorithm from the wavelet community allow dense computation of neural net responses at 8 frames per second on a modern GPU."
]
} |
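The row above mentions pairing DCNNs with a continuous CRF to enforce consistency between neighbouring pixels. As a loose illustration of a pairwise local-consistency term only (not the continuous CRF of the cited work), the edge-aware penalty below discourages depth discontinuities where the image itself is smooth; the weighting scheme and the alpha value are assumptions.

```python
import torch

def pairwise_consistency(depth, image, alpha=10.0):
    """Edge-aware pairwise penalty on neighbouring pixels: depth (B,1,H,W), image (B,3,H,W).
    Neighbouring depths are encouraged to agree unless the image has a strong local edge."""
    dzdx = (depth[:, :, :, 1:] - depth[:, :, :, :-1]).abs()
    dzdy = (depth[:, :, 1:, :] - depth[:, :, :-1, :]).abs()
    wx = torch.exp(-alpha * (image[:, :, :, 1:] - image[:, :, :, :-1]).abs().mean(1, keepdim=True))
    wy = torch.exp(-alpha * (image[:, :, 1:, :] - image[:, :, :-1, :]).abs().mean(1, keepdim=True))
    return (wx * dzdx).mean() + (wy * dzdy).mean()
```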
1907.12237 | 2964535354 | This paper addresses the challenge of localization of anatomical landmarks in knee X-ray images at different stages of osteoarthritis (OA). Landmark localization can be viewed as regression problem, where the landmark position is directly predicted by using the region of interest or even full-size images leading to large memory footprint, especially in case of high resolution medical images. In this work, we propose an efficient deep neural networks framework with an hourglass architecture utilizing a soft-argmax layer to directly predict normalized coordinates of the landmark points. We provide an extensive evaluation of different regularization techniques and various loss functions to understand their influence on the localization performance. Furthermore, we introduce the concept of transfer learning from low-budget annotations, and experimentally demonstrate that such approach is improving the accuracy of landmark localization. Compared to the prior methods, we validate our model on two datasets that are independent from the train data and assess the performance of the method for different stages of OA severity. The proposed approach demonstrates better generalization performance compared to the current state-of-the-art. | There are several methods focusing solely on the ROI localization. Tiulpin @cite_16 proposed a novel anatomical proposal method to localize the knee joint area. Antony @cite_19 used fully convolutional networks for the same problem. Recently, Chen @cite_14 proposed to use object detection methods to measure the knee OA severity. | {
"cite_N": [
"@cite_19",
"@cite_14",
"@cite_16"
],
"mid": [
"2604759322",
"2521048164",
"2159988341"
],
"abstract": [
"This paper introduces a new approach to automatically quantify the severity of knee OA using X-ray images. Automatically quantifying knee OA severity involves two steps: first, automatically localizing the knee joints; next, classifying the localized knee joint images. We introduce a new approach to automatically detect the knee joints using a fully convolutional neural network (FCN). We train convolutional neural networks (CNN) from scratch to automatically quantify the knee OA severity optimizing a weighted ratio of two loss functions: categorical cross-entropy and mean-squared loss. This joint training further improves the overall quantification of knee OA severity, with the added benefit of naturally producing simultaneous multi-class classification and regression outputs. Two public datasets are used to evaluate our approach, the Osteoarthritis Initiative (OAI) and the Multicenter Osteoarthritis Study (MOST), with extremely promising results that outperform existing approaches.",
"This paper proposes a new approach to automatically quantify the severity of knee osteoarthritis (OA) from radiographs using deep convolutional neural networks (CNN). Clinically, knee OA severity is assessed using Kellgren & Lawrence (KL) grades, a five point scale. Previous work on automatically predicting KL grades from radiograph images were based on training shallow classifiers using a variety of hand engineered features. We demonstrate that classification accuracy can be significantly improved using deep convolutional neural network models pre-trained on ImageNet and fine-tuned on knee OA images. Furthermore, we argue that it is more appropriate to assess the accuracy of automatic knee OA severity predictions using a continuous distance-based evaluation metric like mean squared error than it is to use classification accuracy. This leads to the formulation of the prediction of KL grades as a regression problem and further improves accuracy. Results on a dataset of X-ray images and KL grades from the Osteoarthritis Initiative (OAI) show a sizable improvement over the current state-of-the-art.",
"Summary Objective To determine whether computer-based analysis can detect features predictive of osteoarthritis (OA) development in radiographically normal knees. Method A systematic computer-aided image analysis method weighted neighbor distances using a compound hierarchy of algorithms representing morphology (WND-CHARM) was used to analyze pairs of weight-bearing knee X-rays. Initial X-rays were all scored as normal Kellgren–Lawrence (KL) grade 0, and on follow-up approximately 20 years later either developed OA (defined as KL grade=2) or remained normal. Results The computer-aided method predicted whether a knee would change from KL grade 0 to grade 3 with 72 accuracy ( P P Conclusion Radiographic features detectable using a computer-aided image analysis method can predict the future development of radiographic knee OA."
]
} |
1907.12237 | 2964535354 | This paper addresses the challenge of localization of anatomical landmarks in knee X-ray images at different stages of osteoarthritis (OA). Landmark localization can be viewed as regression problem, where the landmark position is directly predicted by using the region of interest or even full-size images leading to large memory footprint, especially in case of high resolution medical images. In this work, we propose an efficient deep neural networks framework with an hourglass architecture utilizing a soft-argmax layer to directly predict normalized coordinates of the landmark points. We provide an extensive evaluation of different regularization techniques and various loss functions to understand their influence on the localization performance. Furthermore, we introduce the concept of transfer learning from low-budget annotations, and experimentally demonstrate that such approach is improving the accuracy of landmark localization. Compared to the prior methods, we validate our model on two datasets that are independent from the train data and assess the performance of the method for different stages of OA severity. The proposed approach demonstrates better generalization performance compared to the current state-of-the-art. | The proposed approach is related to the regression-based methods for keypoint localization @cite_7 . We utilize an hourglass network which is an encoder-decoder model initially introduced for human pose estimation @cite_34 and address both ROI and landmark localization tasks. Several other studies in medical imaging domain also leveraged a similar approach by applying U-Net @cite_38 to the landmark localization problem @cite_37 @cite_0 . However, the encoder-decoder networks are computationally heavy during the training phase since they regress a tensor of high-resolution heatmaps which is challenging for medical images that are typically of a large size. It is notable that decreasing the image resolution could negatively impact the accuracy of landmark localization. In addition, most of the existing approaches use a refinement step which makes the computational burden even harder to cope with. Nevertheless, hourglass CNNs are widely used in human pose estimation @cite_34 due to a possibility of lowering down the resolution and the absence of precise ground truth. | {
"cite_N": [
"@cite_38",
"@cite_37",
"@cite_7",
"@cite_0",
"@cite_34"
],
"mid": [
"2925288829",
"2520402508",
"2773656608",
"2739492061",
"2964175348"
],
"abstract": [
"Abstract In many medical image analysis applications, only a limited amount of training data is available due to the costs of image acquisition and the large manual annotation effort required from experts. Training recent state-of-the-art machine learning methods like convolutional neural networks (CNNs) from small datasets is a challenging task. In this work on anatomical landmark localization, we propose a CNN architecture that learns to split the localization task into two simpler sub-problems, reducing the overall need for large training datasets. Our fully convolutional SpatialConfiguration-Net (SCN) learns this simplification due to multiplying the heatmap predictions of its two components and by training the network in an end-to-end manner. Thus, the SCN dedicates one component to locally accurate but ambiguous candidate predictions, while the other component improves robustness to ambiguities by incorporating the spatial configuration of landmarks. In our extensive experimental evaluation, we show that the proposed SCN outperforms related methods in terms of landmark localization error on a variety of size-limited 2D and 3D landmark localization datasets, i.e., hand radiographs, lateral cephalograms, hand MRIs, and spine CTs.",
"Accurate localization of anatomical landmarks is an important step in medical imaging, as it provides useful prior information for subsequent image analysis and acquisition methods. It is particularly useful for initialization of automatic image analysis tools (e.g. segmentation and registration) and detection of scan planes for automated image acquisition. Landmark localization has been commonly performed using learning based approaches, such as classifier and or regressor models. However, trained models may not generalize well in heterogeneous datasets when the images contain large differences due to size, pose and shape variations of organs. To learn more data-adaptive and patient specific models, we propose a novel stratification based training model, and demonstrate its use in a decision forest. The proposed approach does not require any additional training information compared to the standard model training procedure and can be easily integrated into any decision tree framework. The proposed method is evaluated on 1080 3D high-resolution and 90 multi-stack 2D cardiac cine MR images. The experiments show that the proposed method achieves state-of-the-art landmark localization accuracy and outperforms standard regression and classification based approaches. Additionally, the proposed method is used in a multi-atlas segmentation to create a fully automatic segmentation pipeline, and the results show that it achieves state-of-the-art segmentation accuracy.",
"Image based localization is one of the important problems in computer vision due to its wide applicability in robotics, augmented reality, and autonomous systems. There is a rich set of methods described in the literature how to geometrically register a 2D image w.r.t. a 3D model. Recently, methods based on deep (and convolutional) feedforward networks (CNNs) became popular for pose regression. However, these CNN-based methods are still less accurate than geometry based methods despite being fast and memory efficient. In this work we design a deep neural network architecture based on sparse feature descriptors to estimate the absolute pose of an image. Our choice of using sparse feature descriptors has two major advantages: first, our network is significantly smaller than the CNNs proposed in the literature for this task---thereby making our approach more efficient and scalable. Second---and more importantly---, usage of sparse features allows to augment the training data with synthetic viewpoints, which leads to substantial improvements in the generalization performance to unseen poses. Thus, our proposed method aims to combine the best of the two worlds---feature-based localization and CNN-based pose regression--to achieve state-of-the-art performance in the absolute pose estimation. A detailed analysis of the proposed architecture and a rigorous evaluation on the existing datasets are provided to support our method.",
"We propose a new deep learning based approach for camera relocalization. Our approach localizes a given query image by using a convolutional neural network (CNN) for first retrieving similar database images and then predicting the relative pose between the query and the database images, whose poses are known. The camera location for the query image is obtained via triangulation from two relative translation estimates using a RANSAC based approach. Each relative pose estimate provides a hypothesis for the camera orientation and they are fused in a second RANSAC scheme. The neural network is trained for relative pose estimation in an end-to-end manner using training image pairs. In contrast to previous work, our approach does not require scene-specific training of the network, which improves scalability, and it can also be applied to scenes which are not available during the training of the network. As another main contribution, we release a challenging indoor localisation dataset covering 5 different scenes registered to a common coordinate frame. We evaluate our approach using both our own dataset and the standard 7 Scenes benchmark. The results show that the proposed approach generalizes well to previously unseen scenes and compares favourably to other recent CNN-based methods.",
"We propose a new deep learning based approach for camera relocalization. Our approach localizes a given query image by using a convolutional neural network (CNN) for first retrieving similar database images and then predicting the relative pose between the query and the database images, whose poses are known. The camera location for the query image is obtained via triangulation from two relative translation estimates using a RANSAC based approach. Each relative pose estimate provides a hypothesis for the camera orientation and they are fused in a second RANSAC scheme. The neural network is trained for relative pose estimation in an end-to-end manner using training image pairs. In contrast to previous work, our approach does not require scene-specific training of the network, which improves scalability, and it can also be applied to scenes which are not available during the training of the network. As another main contribution, we release a challenging indoor localisation dataset covering 5 different scenes registered to a common coordinate frame. We evaluate our approach using both our own dataset and the standard 7 Scenes benchmark. The results show that the proposed approach generalizes well to previously unseen scenes and compares favourably to other recent CNN-based methods."
]
} |
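The abstract in the rows above refers to a soft-argmax layer that turns heatmaps into normalized landmark coordinates. A minimal sketch of such a layer is given below; the temperature value and the [0, 1] normalization convention are illustrative assumptions rather than the paper's exact choices.

```python
import torch
import torch.nn.functional as F

def soft_argmax_2d(heatmaps, beta=100.0):
    """Differentiable landmark coordinates from heatmaps of shape (B, K, H, W).
    A softmax over all pixels turns each heatmap into a distribution, and the expected
    (x, y) location under that distribution is returned, normalised to [0, 1]."""
    b, k, h, w = heatmaps.shape
    probs = F.softmax(beta * heatmaps.reshape(b, k, -1), dim=-1).reshape(b, k, h, w)
    ys = torch.linspace(0, 1, h, device=heatmaps.device).view(1, 1, h, 1)
    xs = torch.linspace(0, 1, w, device=heatmaps.device).view(1, 1, 1, w)
    x = (probs * xs).sum(dim=(2, 3))
    y = (probs * ys).sum(dim=(2, 3))
    return torch.stack([x, y], dim=-1)  # (B, K, 2) normalised coordinates
```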
1907.12022 | 2965190089 | Traditional grid neighbor-based static pooling has become a constraint for point cloud geometry analysis. In this paper, we propose DAR-Net, a novel network architecture that focuses on dynamic feature aggregation. The central idea of DAR-Net is generating a self-adaptive pooling skeleton that considers both scene complexity and local geometry features. Providing variable semi-local receptive fields and weights, the skeleton serves as a bridge that connect local convolutional feature extractors and a global recurrent feature integrator. Experimental results on indoor scene datasets show advantages of the proposed approach compared to state-of-the-art architectures that adopt static pooling methods. | Although convolutional neural networks (CNNs) have achieved great success in analyzing 2D images, they cannot be directly applied to point clouds because of their unorganized nature. Without a pixel-based neighborhood defined, vanilla CNNs cannot extract local information and gradually expand receptive field sizes in a meaningful manner. Thus, segmentation tasks were first performed in a way that simulates 2D scenarios – by fusing partial views represented with RGB-D images together @cite_4 @cite_16 @cite_25 @cite_12 . Other work transforms point clouds into cost-inefficient voxel representations on which CNNs can be directly applied @cite_5 @cite_8 @cite_11 . | {
"cite_N": [
"@cite_4",
"@cite_8",
"@cite_5",
"@cite_16",
"@cite_25",
"@cite_12",
"@cite_11"
],
"mid": [
"2768623474",
"2258484932",
"2951309005",
"2432481613",
"2962914239",
"764651262",
"2783032472"
],
"abstract": [
"The need to model visual information with compact representations has existed since the early days of computer vision. We implemented in the past a segmentation and model recovery method for range images which is unfortunately too slow for current size of 3D point clouds and type of applications. Recently, neural networks have become the popular choice for quick and effective processing of visual data. In this article we demonstrate that with a convolutional neural network we could achieve comparable results, that is to determine and model all objects in a given 3D point cloud scene. We started off with a simple architecture that could predict the parameters of a single object in a scene. Then we expanded it with an architecture similar to Faster R-CNN, that could predict the parameters for any number of objects in a scene. The results of the initial neural network were satisfactory. The second network, that performed also segmentation, still gave decent results comparable to the original method, but compared to the initial one, performed somewhat worse. Results, however, are encouraging but further experiments are needed to build CNNs that will be able to replace the state-of-the-art method.",
"Convolutional neural network (CNN) has achieved the state-of-the-art performance in many different visual tasks. Learned from a large-scale training data set, CNN features are much more discriminative and accurate than the handcrafted features. Moreover, CNN features are also transferable among different domains. On the other hand, traditional dictionary-based features (such as BoW and spatial pyramid matching) contain much more local discriminative and structural information, which is implicitly embedded in the images. To further improve the performance, in this paper, we propose to combine CNN with dictionary-based models for scene recognition and visual domain adaptation (DA). Specifically, based on the well-tuned CNN models (e.g., AlexNet and VGG Net), two dictionary-based representations are further constructed, namely, mid-level local representation (MLR) and convolutional Fisher vector (CFV) representation. In MLR, an efficient two-stage clustering method, i.e., weighted spatial and feature space spectral clustering on the parts of a single image followed by clustering all representative parts of all images, is used to generate a class-mixture or a class-specific part dictionary. After that, the part dictionary is used to operate with the multiscale image inputs for generating mid-level representation. In CFV, a multiscale and scale-proportional Gaussian mixture model training strategy is utilized to generate Fisher vectors based on the last convolutional layer of CNN. By integrating the complementary information of MLR, CFV, and the CNN features of the fully connected layer, the state-of-the-art performance can be achieved on scene recognition and DA problems. An interested finding is that our proposed hybrid representation (from VGG net trained on ImageNet) is also complementary to GoogLeNet and or VGG-11 (trained on Place205) greatly.",
"Convolutional neural networks (CNNs) have recently been very successful in a variety of computer vision tasks, especially on those linked to recognition. Optical flow estimation has not been among the tasks where CNNs were successful. In this paper we construct appropriate CNNs which are capable of solving the optical flow estimation problem as a supervised learning task. We propose and compare two architectures: a generic architecture and another one including a layer that correlates feature vectors at different image locations. Since existing ground truth data sets are not sufficiently large to train a CNN, we generate a synthetic Flying Chairs dataset. We show that networks trained on this unrealistic data still generalize very well to existing datasets such as Sintel and KITTI, achieving competitive accuracy at frame rates of 5 to 10 fps.",
"Convolutional Neural Networks (CNNs) have been recently employed to solve problems from both the computer vision and medical image analysis fields. Despite their popularity, most approaches are only able to process 2D images while most medical data used in clinical practice consists of 3D volumes. In this work we propose an approach to 3D image segmentation based on a volumetric, fully convolutional, neural network. Our CNN is trained end-to-end on MRI volumes depicting prostate, and learns to predict segmentation for the whole volume at once. We introduce a novel objective function, that we optimise during training, based on Dice coefficient. In this way we can deal with situations where there is a strong imbalance between the number of foreground and background voxels. To cope with the limited number of annotated volumes available for training, we augment the data applying random non-linear transformations and histogram matching. We show in our experimental evaluation that our approach achieves good performances on challenging test data while requiring only a fraction of the processing time needed by other previous methods.",
"Convolutional Neural Networks (CNNs) have been recently employed to solve problems from both the computer vision and medical image analysis fields. Despite their popularity, most approaches are only able to process 2D images while most medical data used in clinical practice consists of 3D volumes. In this work we propose an approach to 3D image segmentation based on a volumetric, fully convolutional, neural network. Our CNN is trained end-to-end on MRI volumes depicting prostate, and learns to predict segmentation for the whole volume at once. We introduce a novel objective function, that we optimise during training, based on Dice coefficient. In this way we can deal with situations where there is a strong imbalance between the number of foreground and background voxels. To cope with the limited number of annotated volumes available for training, we augment the data applying random non-linear transformations and histogram matching. We show in our experimental evaluation that our approach achieves good performances on challenging test data while requiring only a fraction of the processing time needed by other previous methods.",
"Convolutional neural networks (CNNs) have recently been very successful in a variety of computer vision tasks, especially on those linked to recognition. Optical flow estimation has not been among the tasks CNNs succeeded at. In this paper we construct CNNs which are capable of solving the optical flow estimation problem as a supervised learning task. We propose and compare two architectures: a generic architecture and another one including a layer that correlates feature vectors at different image locations. Since existing ground truth data sets are not sufficiently large to train a CNN, we generate a large synthetic Flying Chairs dataset. We show that networks trained on this unrealistic data still generalize very well to existing datasets such as Sintel and KITTI, achieving competitive accuracy at frame rates of 5 to 10 fps.",
"Conventional deep convolutional neural networks (CNNs) apply convolution operators uniformly in space across all feature maps for hundreds of layers - this incurs a high computational cost for real time applications. For many problems such as object detection and semantic segmentation, we are able to obtain a low-cost computation mask, either from a priori problem knowledge, or from a low resolution segmentation network. We show that such computation masks can be used to reduce computation in the high resolution main network. Variants of sparse activation CNNs have previously been explored on small scale tasks, and showed no degradation in terms of object classification accuracy, but often measured gains in terms of theoretical FLOPs without realizing a practical speed-up when compared to highly optimized dense convolution implementations. In this work, we leverage the sparsity structure of computation masks and propose a novel tiling-based sparse convolution algorithm. We verified the effectiveness of our sparse CNN on LiDAR based 3D object detection, and we report significant wall-clock speed-ups compared to dense convolution, as well as improved detection accuracy."
]
} |
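The row above contrasts point-based networks with methods that rasterize point clouds into voxel grids. The helper below shows the kind of occupancy-grid conversion those voxel-based methods depend on and makes the cubic memory cost easy to see; the resolution and normalization are arbitrary choices for illustration.

```python
import numpy as np

def voxelize(points, grid_size=32):
    """Turn an (N, 3) point cloud into a dense binary occupancy grid of shape
    (grid_size, grid_size, grid_size). Memory grows cubically with resolution,
    which is why this representation scales poorly to large, dense scenes."""
    mins = points.min(axis=0)
    extent = points.max(axis=0) - mins + 1e-9
    idx = np.floor((points - mins) / extent * grid_size).astype(int)
    idx = np.clip(idx, 0, grid_size - 1)
    grid = np.zeros((grid_size,) * 3, dtype=np.float32)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return grid
```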
1907.12022 | 2965190089 | Traditional grid neighbor-based static pooling has become a constraint for point cloud geometry analysis. In this paper, we propose DAR-Net, a novel network architecture that focuses on dynamic feature aggregation. The central idea of DAR-Net is generating a self-adaptive pooling skeleton that considers both scene complexity and local geometry features. Providing variable semi-local receptive fields and weights, the skeleton serves as a bridge that connect local convolutional feature extractors and a global recurrent feature integrator. Experimental results on indoor scene datasets show advantages of the proposed approach compared to state-of-the-art architectures that adopt static pooling methods. | Although these methods did benefit from mature 2D image processing network structures, inefficient 3D data representations constrained them from showing good performance for scene segmentation, where it is necessary to deal with large, dense 3D scenes as a whole. Therefore, recent research gradually turned to networks that directly operate on point clouds when dealing with semantic segmentation for complex indoor outdoor scenes @cite_6 @cite_7 @cite_26 . | {
"cite_N": [
"@cite_26",
"@cite_7",
"@cite_6"
],
"mid": [
"2777686015",
"2796040722",
"2771796597"
],
"abstract": [
"Fully convolutional network (FCN) has been successfully applied in semantic segmentation of scenes represented with RGB images. Images augmented with depth channel provide more understanding of the geometric information of the scene in the image. The question is how to best exploit this additional information to improve the segmentation performance.,,In this paper, we present a neural network with multiple branches for segmenting RGB-D images. Our approach is to use the available depth to split the image into layers with common visual characteristic of objects scenes, or common “scene-resolution”. We introduce context-aware receptive field (CaRF) which provides a better control on the relevant contextual information of the learned features. Equipped with CaRF, each branch of the network semantically segments relevant similar scene-resolution, leading to a more focused domain which is easier to learn. Furthermore, our network is cascaded with features from one branch augmenting the features of adjacent branch. We show that such cascading of features enriches the contextual information of each branch and enhances the overall performance. The accuracy that our network achieves outperforms the stateof-the-art methods on two public datasets.",
"Unlike on images, semantic learning on 3D point clouds using a deep network is challenging due to the naturally unordered data structure. Among existing works, PointNet has achieved promising results by directly learning on point sets. However, it does not take full advantage of a point's local neighborhood that contains fine-grained structural information which turns out to be helpful towards better semantic learning. In this regard, we present two new operations to improve PointNet with a more efficient exploitation of local structures. The first one focuses on local 3D geometric structures. In analogy to a convolution kernel for images, we define a point-set kernel as a set of learnable 3D points that jointly respond to a set of neighboring data points according to their geometric affinities measured by kernel correlation, adapted from a similar technique for point cloud registration. The second one exploits local high-dimensional feature structures by recursive feature aggregation on a nearest-neighbor-graph computed from 3D positions. Experiments show that our network can efficiently capture local information and robustly achieve better performances on major datasets. Our code is available at this http URL",
"3D data such as point clouds and meshes are becoming more and more available. The goal of this paper is to obtain 3D object and scene classification and semantic segmentation. Because point clouds have irregular formats, most of the existing methods convert the 3D data into multiple 2D projection images or 3D voxel grids. These representations are suited as input of conventional CNNs but they either ignore the underlying 3D geometrical structure or are constrained by data sparsity and computational complexity. Therefore, recent methods encode the coordinates of each point cloud to certain high dimensional features to cover the 3D space. However, by their design, these models are not able to sufficiently capture the local patterns. In this paper, we propose a method that directly uses point clouds as input and exploits the implicit space partition of k-d tree structure to learn the local contextual information and aggregate features at different scales hierarchically. Extensive experiments on challenging benchmarks show that our proposed model properly captures the local patterns to provide discriminative point set features. For the task of 3D scene semantic segmentation, our method outperforms the state-of-the-art on the challenging Stanford Large-Scale 3D Indoor Spaces Dataset(S3DIS) by a large margin."
]
} |
1907.12022 | 2965190089 | Traditional grid neighbor-based static pooling has become a constraint for point cloud geometry analysis. In this paper, we propose DAR-Net, a novel network architecture that focuses on dynamic feature aggregation. The central idea of DAR-Net is generating a self-adaptive pooling skeleton that considers both scene complexity and local geometry features. Providing variable semi-local receptive fields and weights, the skeleton serves as a bridge that connects local convolutional feature extractors and a global recurrent feature integrator. Experimental results on indoor scene datasets show advantages of the proposed approach compared to state-of-the-art architectures that adopt static pooling methods. | As introduced, PointNet used multi-layer perceptrons (which process each point independently) to fit the unordered nature of point clouds @cite_8 . Furthermore, similar approaches like using @math convolutional kernels @cite_10 , radius querying @cite_1 or nearest neighbor searching @cite_0 were also adopted. Because local dependencies were not effectively modeled, overfitting constantly occurred when these networks were used to perform large-scale scene segmentation. In addition, work like R-Conv @cite_9 tried to avoid time-consuming neighbor searching with global recurrent transformation prior to convolutional analysis. However, scalability problems still occurred as the global RNN could not directly operate on the point cloud representing an entire dense scene, which often contains several million points. | {
"cite_N": [
"@cite_8",
"@cite_9",
"@cite_1",
"@cite_0",
"@cite_10"
],
"mid": [
"2900731076",
"2963053547",
"2796040722",
"2963336905",
"2963517242"
],
"abstract": [
"Unlike images which are represented in regular dense grids, 3D point clouds are irregular and unordered, hence applying convolution on them can be difficult. In this paper, we extend the dynamic filter to a new convolution operation, named PointConv. PointConv can be applied on point clouds to build deep convolutional networks. We treat convolution kernels as nonlinear functions of the local coordinates of 3D points comprised of weight and density functions. With respect to a given point, the weight functions are learned with multi-layer perceptron networks and density functions through kernel density estimation. The most important contribution of this work is a novel reformulation proposed for efficiently computing the weight functions, which allowed us to dramatically scale up the network and significantly improve its performance. The learned convolution kernel can be used to compute translation-invariant and permutation-invariant convolution on any point set in the 3D space. Besides, PointConv can also be used as deconvolution operators to propagate features from a subsampled point cloud back to its original resolution. Experiments on ModelNet40, ShapeNet, and ScanNet show that deep convolutional neural networks built on PointConv are able to achieve state-of-the-art on challenging semantic segmentation benchmarks on 3D point clouds. Besides, our experiments converting CIFAR-10 into a point cloud showed that networks built on PointConv can match the performance of convolutional networks in 2D images of a similar structure.",
"Unlike on images, semantic learning on 3D point clouds using a deep network is challenging due to the naturally unordered data structure. Among existing works, PointNet has achieved promising results by directly learning on point sets. However, it does not take full advantage of a point's local neighborhood that contains fine-grained structural information which turns out to be helpful towards better semantic learning. In this regard, we present two new operations to improve PointNet with a more efficient exploitation of local structures. The first one focuses on local 3D geometric structures. In analogy to a convolution kernel for images, we define a point-set kernel as a set of learnable 3D points that jointly respond to a set of neighboring data points according to their geometric affinities measured by kernel correlation, adapted from a similar technique for point cloud registration. The second one exploits local high-dimensional feature structures by recursive feature aggregation on a nearest-neighbor-graph computed from 3D positions. Experiments show that our network can efficiently capture local information and robustly achieve better performances on major datasets. Our code is available at http: www.merl.com research license#KCNet",
"Unlike on images, semantic learning on 3D point clouds using a deep network is challenging due to the naturally unordered data structure. Among existing works, PointNet has achieved promising results by directly learning on point sets. However, it does not take full advantage of a point's local neighborhood that contains fine-grained structural information which turns out to be helpful towards better semantic learning. In this regard, we present two new operations to improve PointNet with a more efficient exploitation of local structures. The first one focuses on local 3D geometric structures. In analogy to a convolution kernel for images, we define a point-set kernel as a set of learnable 3D points that jointly respond to a set of neighboring data points according to their geometric affinities measured by kernel correlation, adapted from a similar technique for point cloud registration. The second one exploits local high-dimensional feature structures by recursive feature aggregation on a nearest-neighbor-graph computed from 3D positions. Experiments show that our network can efficiently capture local information and robustly achieve better performances on major datasets. Our code is available at this http URL",
"Semantic parsing of large-scale 3D point clouds is an important research topic in computer vision and remote sensing fields. Most existing approaches utilize hand-crafted features for each modality independently and combine them in a heuristic manner. They often fail to consider the consistency and complementary information among features adequately, which makes them difficult to capture high-level semantic structures. The features learned by most of the current deep learning methods can obtain high-quality image classification results. However, these methods are hard to be applied to recognize 3D point clouds due to unorganized distribution and various point density of data. In this paper, we propose a 3DCNN-DQN-RNN method which fuses the 3D convolutional neural network (CNN), Deep Q-Network (DQN) and Residual recurrent neural network (RNN)for an efficient semantic parsing of large-scale 3D point clouds. In our method, an eye window under control of the 3D CNN and DQN can localize and segment the points of the object's class efficiently. The 3D CNN and Residual RNN further extract robust and discriminative features of the points in the eye window, and thus greatly enhance the parsing accuracy of large-scale point clouds. Our method provides an automatic process that maps the raw data to the classification results. It also integrates object localization, segmentation and classification into one framework. Experimental results demonstrate that the proposed method outperforms the state-of-the-art point cloud classification methods.",
"Point clouds are an efficient data format for 3D data. However, existing 3D segmentation methods for point clouds either do not model local dependencies [21] or require added computations [14, 23]. This work presents a novel 3D segmentation framework, RSNet1, to efficiently model local structures in point clouds. The key component of the RSNet is a lightweight local dependency module. It is a combination of a novel slice pooling layer, Recurrent Neural Network (RNN) layers, and a slice unpooling layer. The slice pooling layer is designed to project features of unordered points onto an ordered sequence of feature vectors so that traditional end-to-end learning algorithms (RNNs) can be applied. The performance of RSNet is validated by comprehensive experiments on the S3DIS[1], ScanNet[3], and ShapeNet [34] datasets. In its simplest form, RSNets surpass all previous state-of-the-art methods on these benchmarks. And comparisons against previous state-of-the-art methods [21, 23] demonstrate the efficiency of RSNets."
]
} |
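The related-work paragraph above contrasts PointNet-style per-point multi-layer perceptrons with neighborhood operators and global RNNs. Below is a minimal NumPy sketch of the per-point idea it refers to: a shared MLP applied to every point independently, followed by max pooling, a symmetric function, so the resulting global feature does not depend on point order. Layer sizes and random weights are illustrative only, not those of PointNet or any cited model.

```python
import numpy as np

def shared_mlp(points, w1, b1, w2, b2):
    """Apply the same two-layer MLP to every point independently.

    points : (N, 3) xyz coordinates
    returns: (N, d_out) per-point features
    """
    h = np.maximum(points @ w1 + b1, 0.0)   # ReLU
    return np.maximum(h @ w2 + b2, 0.0)

def global_feature(points, params):
    """Order-invariant global descriptor via per-point MLP + max pooling."""
    feats = shared_mlp(points, *params)
    return feats.max(axis=0)                # symmetric function over points

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    cloud = rng.standard_normal((1024, 3))
    params = (rng.standard_normal((3, 64)) * 0.1, np.zeros(64),
              rng.standard_normal((64, 128)) * 0.1, np.zeros(128))
    g1 = global_feature(cloud, params)
    g2 = global_feature(cloud[rng.permutation(1024)], params)
    print(np.allclose(g1, g2))              # True: permutation invariant
```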
1907.12022 | 2965190089 | Traditional grid neighbor-based static pooling has become a constraint for point cloud geometry analysis. In this paper, we propose DAR-Net, a novel network architecture that focuses on dynamic feature aggregation. The central idea of DAR-Net is generating a self-adaptive pooling skeleton that considers both scene complexity and local geometry features. Providing variable semi-local receptive fields and weights, the skeleton serves as a bridge that connects local convolutional feature extractors and a global recurrent feature integrator. Experimental results on indoor scene datasets show advantages of the proposed approach compared to state-of-the-art architectures that adopt static pooling methods. | Tangent Convolution @cite_7 proposed a way to efficiently model local dependencies and align convolutional filters on different scales. Their work is based on local covariance analysis and down-sampled neighborhood reconstruction with raw data points. Although tangent convolution itself functioned well at extracting local features, their network architecture was limited by static, uniform intermediate feature aggregation and a complete lack of global integration. | {
"cite_N": [
"@cite_7"
],
"mid": [
"2949360407"
],
"abstract": [
"In many machine learning tasks it is desirable that a model's prediction transforms in an equivariant way under transformations of its input. Convolutional neural networks (CNNs) implement translational equivariance by construction; for other transformations, however, they are compelled to learn the proper mapping. In this work, we develop Steerable Filter CNNs (SFCNNs) which achieve joint equivariance under translations and rotations by design. The proposed architecture employs steerable filters to efficiently compute orientation dependent responses for many orientations without suffering interpolation artifacts from filter rotation. We utilize group convolutions which guarantee an equivariant mapping. In addition, we generalize He's weight initialization scheme to filters which are defined as a linear combination of a system of atomic filters. Numerical experiments show a substantial enhancement of the sample complexity with a growing number of sampled filter orientations and confirm that the network generalizes learned patterns over orientations. The proposed approach achieves state-of-the-art on the rotated MNIST benchmark and on the ISBI 2012 2D EM segmentation challenge."
]
} |
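Tangent convolution, as summarised in the record above, starts from local covariance analysis of each point's neighbourhood. The sketch below shows only that first step, estimating a tangent plane (surface normal plus two tangent directions) from the k nearest neighbours via PCA; it is a generic illustration of local covariance analysis, not the full tangent convolution operator from the cited paper.

```python
import numpy as np

def tangent_basis(points, query_idx, k=16):
    """Estimate a local tangent plane at one point via PCA of its neighbourhood.

    points    : (N, 3) point cloud
    query_idx : index of the point of interest
    returns   : (normal, tangent_u, tangent_v) unit vectors
    """
    q = points[query_idx]
    d = np.linalg.norm(points - q, axis=1)
    nn = points[np.argsort(d)[:k]]                 # k nearest neighbours (incl. q)
    centered = nn - nn.mean(axis=0)
    cov = centered.T @ centered / k                # 3x3 local covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
    normal = eigvecs[:, 0]                         # direction of least variance
    return normal, eigvecs[:, 2], eigvecs[:, 1]    # normal + two tangent axes

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # Noisy samples from the z = 0 plane: the recovered normal should be ~[0, 0, 1].
    pts = np.c_[rng.uniform(-1, 1, (500, 2)), rng.normal(0, 0.01, 500)]
    n, u, v = tangent_basis(pts, 0)
    print(np.round(np.abs(n), 2))
```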
1907.12022 | 2965190089 | Traditional grid neighbor-based static pooling has become a constraint for point cloud geometry analysis. In this paper, we propose DAR-Net, a novel network architecture that focuses on dynamic feature aggregation. The central idea of DAR-Net is generating a self-adaptive pooling skeleton that considers both scene complexity and local geometry features. Providing variable semi-local receptive fields and weights, the skeleton serves as a bridge that connects local convolutional feature extractors and a global recurrent feature integrator. Experimental results on indoor scene datasets show advantages of the proposed approach compared to state-of-the-art architectures that adopt static pooling methods. | Several works turned to the global scale for permutation robustness. In its simplest form, global maximum pooling only fulfilled light-weight tasks like object classification or part segmentation @cite_21 . Moreover, RNNs constructed with advanced cells like Long Short-Term Memory @cite_28 or Gated Recurrent Units @cite_19 offered promising results on scene segmentation @cite_26 , even for those architectures without significant consideration for local feature extraction @cite_15 @cite_14 . However, in those cases the global RNNs were built deep, bidirectional or compact with hidden units, imposing a strict limitation on the direct input. As a result, the original point cloud was often down-sampled to an extreme extent, or the network was only capable of operating on sections of the original point cloud @cite_15 . | {
"cite_N": [
"@cite_26",
"@cite_14",
"@cite_28",
"@cite_21",
"@cite_19",
"@cite_15"
],
"mid": [
"2895472109",
"2963336905",
"2962883174",
"2754526845",
"2890832667",
"2950141408"
],
"abstract": [
"Semantic segmentation of 3D unstructured point clouds remains an open research problem. Recent works predict semantic labels of 3D points by virtue of neural networks but take limited context knowledge into consideration. In this paper, a novel end-to-end approach for unstructured point cloud semantic segmentation, named 3P-RNN, is proposed to exploit the inherent contextual features. First the efficient pointwise pyramid pooling module is investigated to capture local structures at various densities by taking multi-scale neighborhood into account. Then the two-direction hierarchical recurrent neural networks (RNNs) are utilized to explore long-range spatial dependencies. Each recurrent layer takes as input the local features derived from unrolled cells and sweeps the 3D space along two directions successively to integrate structure knowledge. On challenging indoor and outdoor 3D datasets, the proposed framework demonstrates robust performance superior to state-of-the-arts.",
"Semantic parsing of large-scale 3D point clouds is an important research topic in computer vision and remote sensing fields. Most existing approaches utilize hand-crafted features for each modality independently and combine them in a heuristic manner. They often fail to consider the consistency and complementary information among features adequately, which makes them difficult to capture high-level semantic structures. The features learned by most of the current deep learning methods can obtain high-quality image classification results. However, these methods are hard to be applied to recognize 3D point clouds due to unorganized distribution and various point density of data. In this paper, we propose a 3DCNN-DQN-RNN method which fuses the 3D convolutional neural network (CNN), Deep Q-Network (DQN) and Residual recurrent neural network (RNN)for an efficient semantic parsing of large-scale 3D point clouds. In our method, an eye window under control of the 3D CNN and DQN can localize and segment the points of the object's class efficiently. The 3D CNN and Residual RNN further extract robust and discriminative features of the points in the eye window, and thus greatly enhance the parsing accuracy of large-scale point clouds. Our method provides an automatic process that maps the raw data to the classification results. It also integrates object localization, segmentation and classification into one framework. Experimental results demonstrate that the proposed method outperforms the state-of-the-art point cloud classification methods.",
"Model compression is significant for the wide adoption of Recurrent Neural Networks (RNNs) in both user devices possessing limited resources and business clusters requiring quick responses to large-scale service requests. This work aims to learn structurally-sparse Long Short-Term Memory (LSTM) by reducing the sizes of basic structures within LSTM units, including input updates, gates, hidden states, cell states and outputs. Independently reducing the sizes of basic structures can result in inconsistent dimensions among them, and consequently, end up with invalid LSTM units. To overcome the problem, we propose Intrinsic Sparse Structures (ISS) in LSTMs. Removing a component of ISS will simultaneously decrease the sizes of all basic structures by one and thereby always maintain the dimension consistency. By learning ISS within LSTM units, the obtained LSTMs remain regular while having much smaller basic structures. Based on group Lasso regularization, our method achieves 10.59x speedup without losing any perplexity of a language modeling of Penn TreeBank dataset. It is also successfully evaluated through a compact model with only 2.69M weights for machine Question Answering of SQuAD dataset. Our approach is successfully extended to non- LSTM RNNs, like Recurrent Highway Networks (RHNs). Our source code is available.",
"Model compression is significant for the wide adoption of Recurrent Neural Networks (RNNs) in both user devices possessing limited resources and business clusters requiring quick responses to large-scale service requests. This work aims to learn structurally-sparse Long Short-Term Memory (LSTM) by reducing the sizes of basic structures within LSTM units, including input updates, gates, hidden states, cell states and outputs. Independently reducing the sizes of basic structures can result in inconsistent dimensions among them, and consequently, end up with invalid LSTM units. To overcome the problem, we propose Intrinsic Sparse Structures (ISS) in LSTMs. Removing a component of ISS will simultaneously decrease the sizes of all basic structures by one and thereby always maintain the dimension consistency. By learning ISS within LSTM units, the obtained LSTMs remain regular while having much smaller basic structures. Based on group Lasso regularization, our method achieves 10.59x speedup without losing any perplexity of a language modeling of Penn TreeBank dataset. It is also successfully evaluated through a compact model with only 2.69M weights for machine Question Answering of SQuAD dataset. Our approach is successfully extended to non- LSTM RNNs, like Recurrent Highway Networks (RHNs). Our source code is publicly available at this https URL",
"Recurrent neural networks (RNNs) such as long short-term memory and gated recurrent units are pivotal building blocks across a broad spectrum of sequence modeling problems. This paper proposes a recurrently controlled recurrent network (RCRN) for expressive and powerful sequence encoding. More concretely, the key idea behind our approach is to learn the recurrent gating functions using recurrent networks. Our architecture is split into two components - a controller cell and a listener cell whereby the recurrent controller actively influences the compositionality of the listener cell. We conduct extensive experiments on a myriad of tasks in the NLP domain such as sentiment analysis (SST, IMDb, Amazon reviews, etc.), question classification (TREC), entailment classification (SNLI, SciTail), answer selection (WikiQA, TrecQA) and reading comprehension (NarrativeQA). Across all 26 datasets, our results demonstrate that RCRN not only consistently outperforms BiLSTMs but also stacked BiLSTMs, suggesting that our controller architecture might be a suitable replacement for the widely adopted stacked architecture. Additionally, RCRN achieves state-of-the-art results on several well-established datasets.",
"Recurrent Neural Network (RNN) is one of the most popular architectures used in Natural Language Processsing (NLP) tasks because its recurrent structure is very suitable to process variable-length text. RNN can utilize distributed representations of words by first converting the tokens comprising each text into vectors, which form a matrix. And this matrix includes two dimensions: the time-step dimension and the feature vector dimension. Then most existing models usually utilize one-dimensional (1D) max pooling operation or attention-based operation only on the time-step dimension to obtain a fixed-length vector. However, the features on the feature vector dimension are not mutually independent, and simply applying 1D pooling operation over the time-step dimension independently may destroy the structure of the feature representation. On the other hand, applying two-dimensional (2D) pooling operation over the two dimensions may sample more meaningful features for sequence modeling tasks. To integrate the features on both dimensions of the matrix, this paper explores applying 2D max pooling operation to obtain a fixed-length representation of the text. This paper also utilizes 2D convolution to sample more meaningful information of the matrix. Experiments are conducted on six text classification tasks, including sentiment analysis, question classification, subjectivity classification and newsgroup classification. Compared with the state-of-the-art models, the proposed models achieve excellent performance on 4 out of 6 tasks. Specifically, one of the proposed models achieves highest accuracy on Stanford Sentiment Treebank binary classification and fine-grained classification tasks."
]
} |
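The paragraph above describes integrating global context by running an RNN (LSTM or GRU) over heavily down-sampled point features, which is why input size becomes a constraint. To make that concrete, here is a small, self-contained GRU cell stepped over a sequence of block-level feature vectors; the gate equations follow one common GRU convention, and all dimensions and weights are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_sequence(xs, W, U, b, d_hidden):
    """Run a single GRU cell over a sequence of feature vectors.

    xs : (T, d_in) sequence, e.g. features of T downsampled blocks of a scene
    W  : (3, d_in, d_hidden), U : (3, d_hidden, d_hidden), b : (3, d_hidden)
    Returns the final hidden state (d_hidden,), a global summary of the sequence.
    """
    h = np.zeros(d_hidden)
    for x in xs:
        z = sigmoid(x @ W[0] + h @ U[0] + b[0])        # update gate
        r = sigmoid(x @ W[1] + h @ U[1] + b[1])        # reset gate
        h_tilde = np.tanh(x @ W[2] + (r * h) @ U[2] + b[2])
        h = (1.0 - z) * h + z * h_tilde                # one common convention
    return h

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    d_in, d_hidden, T = 32, 64, 50
    W = rng.standard_normal((3, d_in, d_hidden)) * 0.1
    U = rng.standard_normal((3, d_hidden, d_hidden)) * 0.1
    b = np.zeros((3, d_hidden))
    blocks = rng.standard_normal((T, d_in))            # stand-in for block features
    print(gru_sequence(blocks, W, U, b, d_hidden).shape)
```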
1907.12022 | 2965190089 | Traditional grid neighbor-based static pooling has become a constraint for point cloud geometry analysis. In this paper, we propose DAR-Net, a novel network architecture that focuses on dynamic feature aggregation. The central idea of DAR-Net is generating a self-adaptive pooling skeleton that considers both scene complexity and local geometry features. Providing variable semi-local receptive fields and weights, the skeleton serves as a bridge that connects local convolutional feature extractors and a global recurrent feature integrator. Experimental results on indoor scene datasets show advantages of the proposed approach compared to state-of-the-art architectures that adopt static pooling methods. | Various works in this area aimed to adapt existing supervised-learning networks into auto-encoders. For example, FoldingNet @cite_23 managed to learn global features of a 3D object through constructing a deformable 2D grid surface; PointWise @cite_20 considered the theoretical smoothness of object surfaces; and MortonNet @cite_17 learned compact local features by generating fractal space-filling curves and predicting their endpoints. Although features provided by these auto-encoders are reported to be beneficial, we do not adopt them into our network, to allow a fair evaluation of the aggregation method we propose. | {
"cite_N": [
"@cite_20",
"@cite_23",
"@cite_17"
],
"mid": [
"2796426482",
"2778361827",
"2890018557"
],
"abstract": [
"Recent deep networks that directly handle points in a point set, e.g., PointNet, have been state-of-the-art for supervised learning tasks on point clouds such as classification and segmentation. In this work, a novel end-to-end deep auto-encoder is proposed to address unsupervised learning challenges on point clouds. On the encoder side, a graph-based enhancement is enforced to promote local structures on top of PointNet. Then, a novel folding-based decoder deforms a canonical 2D grid onto the underlying 3D object surface of a point cloud, achieving low reconstruction errors even for objects with delicate structures. The proposed decoder only uses about 7 parameters of a decoder with fully-connected neural networks, yet leads to a more discriminative representation that achieves higher linear SVM classification accuracy than the benchmark. In addition, the proposed decoder structure is shown, in theory, to be a generic architecture that is able to reconstruct an arbitrary point cloud from a 2D grid. Our code is available at http: www.merl.com research license#FoldingNet",
"Recent deep networks that directly handle points in a point set, e.g., PointNet, have been state-of-the-art for supervised semantic learning tasks on point clouds such as classification and segmentation. In this work, a novel end-to-end deep auto-encoder is proposed to address unsupervised learning challenges on point clouds. On the encoder side, a graph-based enhancement is enforced to promote local structures on top of PointNet. Then, a novel folding-based approach is proposed in the decoder, which folds a 2D grid onto the underlying 3D object surface of a point cloud. The proposed decoder only uses about 7 parameters of a decoder with fully-connected neural networks, yet leads to a more discriminative representation that achieves higher linear SVM classification accuracy than the benchmark. In addition, the proposed decoder structure is shown, in theory, to be a generic architecture that is able to reconstruct an arbitrary point cloud from a 2D grid. Finally, this folding-based decoder is interpretable since the reconstruction could be viewed as a fine granular warping from the 2D grid to the point cloud surface.",
"Learning 3D global features by aggregating multiple views has been introduced as a successful strategy for 3D shape analysis. In recent deep learning models with end-to-end training, pooling is a widely adopted procedure for view aggregation. However, pooling merely retains the max or mean value over all views, which disregards the content information of almost all views and also the spatial information among the views. To resolve these issues, we propose Sequential Views To Sequential Labels (SeqViews2SeqLabels) as a novel deep learning model with an encoder–decoder structure based on recurrent neural networks (RNNs) with attention. SeqViews2SeqLabels consists of two connected parts, an encoder-RNN followed by a decoder-RNN, that aim to learn the global features by aggregating sequential views and then performing shape classification from the learned global features, respectively. Specifically, the encoder-RNN learns the global features by simultaneously encoding the spatial and content information of sequential views, which captures the semantics of the view sequence. With the proposed prediction of sequential labels, the decoder-RNN performs more accurate classification using the learned global features by predicting sequential labels step by step. Learning to predict sequential labels provides more and finer discriminative information among shape classes to learn, which alleviates the overfitting problem inherent in training using a limited number of 3D shapes. Moreover, we introduce an attention mechanism to further improve the discriminative ability of SeqViews2SeqLabels. This mechanism increases the weight of views that are distinctive to each shape class, and it dramatically reduces the effect of selecting the first view position. Shape classification and retrieval results under three large-scale benchmarks verify that SeqViews2SeqLabels learns more discriminative global features by more effectively aggregating sequential views than state-of-the-art methods."
]
} |
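MortonNet, mentioned above, builds on space-filling curves. For background, the sketch below computes a 3D Morton (Z-order) code for quantised point coordinates by interleaving their bits, which is the standard construction of that curve; how the cited network actually uses such codes is not reproduced here.

```python
def part1by2(n, bits=10):
    """Spread the low `bits` bits of n so that two zero bits separate each bit."""
    out = 0
    for i in range(bits):
        out |= ((n >> i) & 1) << (3 * i)
    return out

def morton3d(x, y, z, bits=10):
    """Interleave the bits of quantised x, y, z into one Z-order index."""
    return part1by2(x, bits) | (part1by2(y, bits) << 1) | (part1by2(z, bits) << 2)

def quantise(coord, lo, hi, bits=10):
    """Map a float coordinate in [lo, hi] to an integer grid cell."""
    cells = (1 << bits) - 1
    return int(round((coord - lo) / (hi - lo) * cells))

if __name__ == "__main__":
    pts = [(0.1, 0.2, 0.3), (0.1, 0.2, 0.31), (0.9, 0.8, 0.7)]
    codes = [morton3d(*(quantise(c, 0.0, 1.0) for c in p)) for p in pts]
    # Nearby points tend to get nearby codes, which is why sorting by Morton
    # code groups spatially close points together.
    print(sorted(zip(codes, pts)))
```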
1907.12022 | 2965190089 | Traditional grid neighbor-based static pooling has become a constraint for point cloud geometry analysis. In this paper, we propose DAR-Net, a novel network architecture that focuses on dynamic feature aggregation. The central idea of DAR-Net is generating a self-adaptive pooling skeleton that considers both scene complexity and local geometry features. Providing variable semi-local receptive fields and weights, the skeleton serves as a bridge that connects local convolutional feature extractors and a global recurrent feature integrator. Experimental results on indoor scene datasets show advantages of the proposed approach compared to state-of-the-art architectures that adopt static pooling methods. | In contrast to the common goal of finding a rich, concise feature embedding, SO-Net @cite_10 learned a self-organizing map (SOM) in an unsupervised manner for feature extraction and aggregation. Despite its novelty, few performance improvements were observed even when compared to PointNet or OctNet @cite_24 . Possible reasons include their usage of the SOM and a lack of deep local and global analysis. | {
"cite_N": [
"@cite_24",
"@cite_10"
],
"mid": [
"2790466413",
"2963719584"
],
"abstract": [
"This paper presents SO-Net, a permutation invariant architecture for deep learning with orderless point clouds. The SO-Net models the spatial distribution of point cloud by building a Self-Organizing Map (SOM). Based on the SOM, SO-Net performs hierarchical feature extraction on individual points and SOM nodes, and ultimately represents the input point cloud by a single feature vector. The receptive field of the network can be systematically adjusted by conducting point-to-node k nearest neighbor search. In recognition tasks such as point cloud reconstruction, classification, object part segmentation and shape retrieval, our proposed network demonstrates performance that is similar with or better than state-of-the-art approaches. In addition, the training speed is significantly faster than existing point cloud recognition networks because of the parallelizability and simplicity of the proposed architecture. Our code is available at the project website. this https URL",
"This paper presents SO-Net, a permutation invariant architecture for deep learning with orderless point clouds. The SO-Net models the spatial distribution of point cloud by building a Self-Organizing Map (SOM). Based on the SOM, SO-Net performs hierarchical feature extraction on individual points and SOM nodes, and ultimately represents the input point cloud by a single feature vector. The receptive field of the network can be systematically adjusted by conducting point-to-node k nearest neighbor search. In recognition tasks such as point cloud reconstruction, classification, object part segmentation and shape retrieval, our proposed network demonstrates performance that is similar with or better than state-of-the-art approaches. In addition, the training speed is significantly faster than existing point cloud recognition networks because of the parallelizability and simplicity of the proposed architecture. Our code is available at the project website.1"
]
} |
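SO-Net, discussed in the record above, organises a point cloud with a self-organizing map. The following is a plain, generic SOM training loop over 3D points (best-matching node plus neighbourhood-weighted updates); it illustrates the SOM mechanism itself, not SO-Net's architecture or hyper-parameters.

```python
import numpy as np

def train_som(points, grid=(8, 8), iters=2000, lr=0.5, sigma=2.0, seed=0):
    """Fit a 2D grid of SOM nodes to a 3D point cloud.

    Each iteration picks one point, finds the best-matching node, and pulls
    that node (and, more weakly, its grid neighbours) towards the point.
    Returns node positions of shape (grid[0] * grid[1], 3).
    """
    rng = np.random.default_rng(seed)
    rows, cols = grid
    nodes = rng.uniform(points.min(0), points.max(0), size=(rows * cols, 3))
    # Grid coordinates of every node, used by the neighbourhood function.
    gy, gx = np.divmod(np.arange(rows * cols), cols)
    grid_pos = np.stack([gy, gx], axis=1).astype(float)

    for t in range(iters):
        decay = np.exp(-t / iters)
        p = points[rng.integers(len(points))]
        best = np.argmin(np.linalg.norm(nodes - p, axis=1))       # best-matching unit
        d2 = np.sum((grid_pos - grid_pos[best]) ** 2, axis=1)
        h = np.exp(-d2 / (2.0 * (sigma * decay) ** 2))            # neighbourhood weights
        nodes += (lr * decay) * h[:, None] * (p - nodes)
    return nodes

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    cloud = rng.standard_normal((5000, 3))
    skeleton = train_som(cloud)
    print(skeleton.shape)   # (64, 3) nodes summarising the cloud
```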
1907.11941 | 2966630200 | There can be performance and vulnerability concerns with block ciphers, thus stream ciphers can be used as an alternative. Although many symmetric key stream ciphers are fairly resistant to side-channel attacks, cryptographic artefacts may exist in memory. This paper identifies a significant vulnerability within OpenSSH and OpenSSL which involves the discovery of cryptographic artefacts used within the ChaCha20 cipher. This can allow for the cracking of tunneled data using a single targeted memory extraction. With this, law enforcement agencies and/or malicious agents could use the vulnerability to take copies of the encryption keys used for each tunnelled connection. The user of a virtual machine would not be alerted to the capturing of the encryption key, as the method runs from an extraction of the running memory. Methods of mitigation include making cryptographic artefacts difficult to discover and limiting memory access. | This paper focuses on decrypting network traffic encrypted with the ChaCha20-Poly1305 cipher. Prior studies have investigated potential vulnerabilities in cipher design and in cipher implementation. Researchers have found no vulnerabilities in the ChaCha20 design. For example, differential attacks using techniques such as identifying significant key bits only succeeded with reduced cipher rounds and significant volumes of plaintext-ciphertext pairs @cite_8 @cite_34 . Combined linear and differential analysis improves performance, but is similarly restricted @cite_21 . | {
"cite_N": [
"@cite_34",
"@cite_21",
"@cite_8"
],
"mid": [
"2612613213",
"2768792654",
"1577801461"
],
"abstract": [
"The stream cipher ChaCha20 and the MAC function Poly1305 have been published as IETF RFC 7539. Since then, the industry is starting to use it more often. For example, it has been implemented by Google in their Chrome browser for TLS and also support has been added to OpenSSL, as well as OpenSSH. It is often claimed, that the algorithms are designed to be resistant to side-channel attacks. However, this is only true, if the only observable side-channel is the timing behavior. In this paper, we show that ChaCha20 is susceptible to power and EM side-channel analysis, which also translates to an attack on Poly1305, if used together with ChaCha20 for key generation. As a first countermeasure, we analyze the effectiveness of randomly shuffling the operations of the ChaCha round function.",
"ChaCha is a family of stream ciphers that are very efficient on constrainted platforms. In this paper, we present electromagnetic side-channel analyses for two different software implementations of ChaCha20 on a 32-bit architecture: one compiled and another one directly written in assembly. On the device under test, practical experiments show that they have different levels of resistance to side-channel attacks. For the most leakage-resilient implementation, an analysis of the whole quarter round is required. To overcome this complication, we introduce an optimized attack based on a divide-and-conquer strategy named bricklayer attack.",
"The stream cipher Salsa20 was introduced by Bernstein in 2005 as a candidate in the eSTREAM project, accompanied by the reduced versions Salsa20 8 and Salsa20 12. ChaCha is a variant of Salsa20 aiming at bringing better diffusion for similar performance. Variants of Salsa20 with up to 7 rounds (instead of 20) have been broken by differential cryptanalysis, while ChaCha has not been analyzed yet. We introduce a novel method for differential cryptanalysis of Salsa20 and ChaCha, inspired by correlation attacks and related to the notion of neutral bits. This is the first application of neutral bits in stream cipher cryptanalysis. It allows us to break the 256-bit version of Salsa20 8, to bring faster attacks on the 7-round variant, and to break 6- and 7-round ChaCha. In a second part, we analyze the compression function Rumba, built as the XOR of four Salsa20 instances and returning a 512-bit output. We find collision and preimage attacks for two simplified variants, then we discuss differential attacks on the original version, and exploit a high-probability differential to reduce complexity of collision search from 2256to 279for 3-round Rumba. To prove the correctness of our approach we provide examples of collisions and near-collisions on simplified versions."
]
} |
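The differential attacks summarised above target reduced-round versions of the ChaCha round function. For reference, the quarter-round is a short sequence of 32-bit add/rotate/XOR operations; the sketch below follows the description and test vector in RFC 7539 but is meant purely as an illustration, not a vetted implementation.

```python
MASK32 = 0xFFFFFFFF

def rotl32(v, n):
    """Rotate a 32-bit word left by n bits."""
    return ((v << n) & MASK32) | (v >> (32 - n))

def quarter_round(state, a, b, c, d):
    """One ChaCha quarter-round on four words of the 16-word state (RFC 7539)."""
    state[a] = (state[a] + state[b]) & MASK32; state[d] = rotl32(state[d] ^ state[a], 16)
    state[c] = (state[c] + state[d]) & MASK32; state[b] = rotl32(state[b] ^ state[c], 12)
    state[a] = (state[a] + state[b]) & MASK32; state[d] = rotl32(state[d] ^ state[a], 8)
    state[c] = (state[c] + state[d]) & MASK32; state[b] = rotl32(state[b] ^ state[c], 7)

if __name__ == "__main__":
    # Test vector from RFC 7539, section 2.1.1.
    s = [0] * 16
    s[0], s[1], s[2], s[3] = 0x11111111, 0x01020304, 0x9b8d6f43, 0x01234567
    quarter_round(s, 0, 1, 2, 3)
    print([hex(w) for w in s[:4]])
    # Expected: 0xea2a92f4, 0xcb1cf8ce, 0x4581472e, 0x5881c4bb
```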
1907.11941 | 2966630200 | There can be performance and vulnerability concerns with block ciphers, thus stream ciphers can be used as an alternative. Although many symmetric key stream ciphers are fairly resistant to side-channel attacks, cryptographic artefacts may exist in memory. This paper identifies a significant vulnerability within OpenSSH and OpenSSL which involves the discovery of cryptographic artefacts used within the ChaCha20 cipher. This can allow for the cracking of tunneled data using a single targeted memory extraction. With this, law enforcement agencies and/or malicious agents could use the vulnerability to take copies of the encryption keys used for each tunnelled connection. The user of a virtual machine would not be alerted to the capturing of the encryption key, as the method runs from an extraction of the running memory. Methods of mitigation include making cryptographic artefacts difficult to discover and limiting memory access. | ChaCha20 implementations may be vulnerable to side-channel attacks. While the cipher design may prevent timing attacks @cite_19 , correlating power or electromagnetic radiation when specific cryptographic activities are performed may leak key stream information @cite_7 @cite_10 . Engendering instruction skips, for example by using a laser or electromagnetic pulse, could potentially produce the key stream but timing the activity would be challenging @cite_28 . Furthermore, these approaches may be impractical in real-world scenarios. | {
"cite_N": [
"@cite_28",
"@cite_19",
"@cite_10",
"@cite_7"
],
"mid": [
"2768792654",
"2612613213",
"2336016133",
"2604967423"
],
"abstract": [
"ChaCha is a family of stream ciphers that are very efficient on constrainted platforms. In this paper, we present electromagnetic side-channel analyses for two different software implementations of ChaCha20 on a 32-bit architecture: one compiled and another one directly written in assembly. On the device under test, practical experiments show that they have different levels of resistance to side-channel attacks. For the most leakage-resilient implementation, an analysis of the whole quarter round is required. To overcome this complication, we introduce an optimized attack based on a divide-and-conquer strategy named bricklayer attack.",
"The stream cipher ChaCha20 and the MAC function Poly1305 have been published as IETF RFC 7539. Since then, the industry is starting to use it more often. For example, it has been implemented by Google in their Chrome browser for TLS and also support has been added to OpenSSL, as well as OpenSSH. It is often claimed, that the algorithms are designed to be resistant to side-channel attacks. However, this is only true, if the only observable side-channel is the timing behavior. In this paper, we show that ChaCha20 is susceptible to power and EM side-channel analysis, which also translates to an attack on Poly1305, if used together with ChaCha20 for key generation. As a first countermeasure, we analyze the effectiveness of randomly shuffling the operations of the ChaCha round function.",
"Recently, ChaCha20 (the stream cipher ChaCha with 20 rounds) is in the process of being a standardized and thus it attracts serious interest in cryptanalysis. The most significant effort to analyse Salsa and ChaCha was explained by Aumasson et?al. long back (FSE 2008) and further, only minor improvements could be achieved. In this paper, first we revisit the work of Aumasson et?al. to provide a clearer insight of the existing attack (2248 complexity for ChaCha7, i.e.,?7 rounds) and show certain improvements (complexity around 2243) by exploiting additional Probabilistic Neutral Bits. More importantly, we describe a novel idea that explores proper choice of IVs corresponding to the keys, for which the complexity can be improved further (2239). The choice of IVs corresponding to the keys is the prime observation of this work. We systematically show how a single difference propagates after one round and how the differences can be reduced with proper choices of IVs. For Salsa too (Salsa20 8, i.e.,?8 rounds), we get improvement in complexity, reducing it to 2 245.5 from 2 247.2 reported by Aumasson et?al.",
"Time variation during program execution can leak sensitive information. Time variations due to program control flow and hardware resource contention have been used to steal encryption keys in cipher implementations such as AES and RSA. A number of approaches to mitigate timing-based side-channel attacks have been proposed including cache partitioning, control-flow obfuscation and injecting timing noise into the outputs of code. While these techniques make timing-based side-channel attacks more difficult, they do not eliminate the risks. Prior techniques are either too specific or too expensive, and all leave remnants of the original timing side channel for later attackers to attempt to exploit. In this work, we show that the state-of-the-art techniques in timing side-channel protection, which limit timing leakage but do not eliminate it, still have significant vulnerabilities to timing-based side-channel attacks. To provide a means for total protection from timing-based side-channel attacks, we develop Ozone, the first zero timing leakage execution resource for a modern microarchitecture. Code in Ozone execute under a special hardware thread that gains exclusive access to a single core's resources for a fixed (and limited) number of cycles during which it cannot be interrupted. Memory access under Ozone thread execution is limited to a fixed size uncached scratchpad memory, and all Ozone threads begin execution with a known fixed microarchitectural state. We evaluate Ozone using a number of security sensitive kernels that have previously been targets of timing side-channel attacks, and show that Ozone eliminates timing leakage with minimal performance overhead."
]
} |
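Power and electromagnetic analyses such as those cited above typically correlate a hypothetical leakage model against measured traces. The sketch below is a textbook correlation power analysis (CPA) loop with a Hamming-weight model on a simple XOR intermediate, run on simulated traces; it is not an attack on ChaCha20 specifically, and all trace data here is synthetic.

```python
import numpy as np

HW = np.array([bin(x).count("1") for x in range(256)])   # Hamming weight table

def cpa_recover_key_byte(inputs, traces):
    """Textbook correlation power analysis for one key byte.

    inputs : (T,) known public byte per trace (e.g. a nonce/plaintext byte)
    traces : (T,) one leakage sample per trace
    Guesses the key byte whose Hamming-weight model of (input XOR guess)
    correlates best with the measured samples.
    """
    scores = []
    for guess in range(256):
        model = HW[inputs ^ guess].astype(float)
        scores.append(abs(np.corrcoef(model, traces)[0, 1]))
    return int(np.argmax(scores))

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    true_key, T = 0x3C, 5000
    inputs = rng.integers(0, 256, T, dtype=np.uint8)
    # Simulated leakage: Hamming weight of the secret intermediate plus noise.
    traces = HW[inputs ^ true_key] + rng.normal(0, 1.0, T)
    print(hex(cpa_recover_key_byte(inputs, traces)))       # expect 0x3c
```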
1907.11941 | 2966630200 | There can be performance and vulnerability concerns with block ciphers, thus stream ciphers can be used as an alternative. Although many symmetric key stream ciphers are fairly resistant to side-channel attacks, cryptographic artefacts may exist in memory. This paper identifies a significant vulnerability within OpenSSH and OpenSSL which involves the discovery of cryptographic artefacts used within the ChaCha20 cipher. This can allow for the cracking of tunneled data using a single targeted memory extraction. With this, law enforcement agencies and/or malicious agents could use the vulnerability to take copies of the encryption keys used for each tunnelled connection. The user of a virtual machine would not be alerted to the capturing of the encryption key, as the method runs from an extraction of the running memory. Methods of mitigation include making cryptographic artefacts difficult to discover and limiting memory access. | Cryptographic artefacts have been found in device memory. For instance, RSA keys may be discovered in virtual machine images @cite_32 @cite_1 . Studies have also discovered DES and AES cipher keys in cold-boot attacks @cite_17 , Skipjack and Twofish key blocks in virtual memory @cite_22 , and AES session keys in virtual memory @cite_3 . Although these approaches use entropy measures to determine possible keys, they do not decrypt ciphertext encrypted with ciphers such as AES in Counter mode and ChaCha20, which require nonces/initialization vectors. This study builds on the TLSkex @cite_3 and MemDecrypt studies @cite_38 which used privileged monitors to extract identified virtual machine process memory to identify TLS 1.2 AES keys, and SSH AES keys and initialization vectors, respectively. Instead, this study uses a different algorithm to find ChaCha20 cipher keys and nonces in device memory enabling SSH and TLS sessions to be decrypted in a non-invasive manner. The approach may enable decryption of data encrypted with Adiantum @cite_12 , the Google disk-encryption algorithm based on XChaCha20, an extension to ChaCha20 and Salsa20 with a longer nonce @cite_33 . | {
"cite_N": [
"@cite_38",
"@cite_22",
"@cite_33",
"@cite_1",
"@cite_32",
"@cite_3",
"@cite_12",
"@cite_17"
],
"mid": [
"2144007792",
"2935536571",
"2900861686",
"2172060328",
"2324374191",
"2103289002",
"1488058190",
"2765992693"
],
"abstract": [
"Cryptanalysis of ciphers usually involves massive computations. The security parameters of cryptographic algorithms are commonly chosen so that attacks are infeasible with available computing resources. Thus, in the absence of mathematical breakthroughs to a cryptanalytical problem, a promising way for tackling the computations involved is to build special-purpose hardware exhibiting a (much) better performance-cost ratio than off-the-shelf computers. This contribution presents a variety of cryptanalytical applications utilizing the cost-optimized parallel code breaker (COPACOBANA) machine, which is a high-performance low-cost cluster consisting of 120 field-programmable gate arrays (FPGAs). COPACOBANA appears to be the only such reconfigurable parallel FPGA machine optimized for code breaking tasks reported in the open literature. Depending on the actual algorithm, the parallel hardware architecture can outperform conventional computers by several orders of magnitude. In this work, we focus on novel implementations of cryptanalytical algorithms, utilizing the impressive computational power of COPACOBANA. We describe various exhaustive key search attacks on symmetric ciphers and demonstrate an attack on a security mechanism employed in the electronic passport (e-passport). Furthermore, we describe time-memory trade-off techniques that can, e.g., be used for attacking the popular A5 1 algorithm used in GSM voice encryption. In addition, we introduce efficient implementations of more complex cryptanalysis on asymmetric cryptosystems, e.g., elliptic curve cryptosystems (ECCs) and number cofactorization for RSA. Even though breaking RSA or elliptic curves with parameter lengths used in most practical applications is out of reach with COPACOBANA, our attacks on algorithms with artificially short bit lengths allow us to extrapolate more reliable security estimates for real-world bit lengths. This is particularly useful for deriving estimates about the longevity of asymmetric key lengths.",
"Abstract Decrypting and inspecting encrypted malicious communications may assist crime detection and prevention. Access to client or server memory enables the discovery of artefacts required for decrypting secure communications. This paper develops the MemDecrypt framework to investigate the discovery of encrypted artefacts in memory and applies the methodology to decrypting the secure communications of virtual machines. For Secure Shell, used for secure remote server management, file transfer, and tunnelling inter alia, MemDecrypt experiments rapidly yield AES-encrypted details for a live secure file transfer including remote user credentials, transmitted file name and file contents. Thus, MemDecrypt discovers cryptographic artefacts and quickly decrypts live SSH malicious communications including the detection and interception of data exfiltration of confidential data.",
"This paper demonstrates the improved power and electromagnetic (EM) side-channel attack (SCA) resistance of 128-bit Advanced Encryption Standard (AES) engines in 130-nm CMOS using random fast voltage dithering (RFVD) enabled by integrated voltage regulator (IVR) with the bond-wire inductors and an on-chip all-digital clock modulation (ADCM) circuit. RFVD scheme transforms the current signatures with random variations in AES input supply while adding random shifts in the clock edges in the presence of global and local supply noises. The measured power signatures at the supply node of the AES engines show upto 37 @math reduction in peak for higher order test vector leakage assessment (TVLA) metric and upto 692 @math increase in minimum traces required to disclose (MTD) the secret encryption key with correlation power analysis (CPA). Similarly, SCA on the measured EM signatures from the chip demonstrates a reduction of upto 11.3 @math in TVLA peak and upto 37 @math increase in correlation EM analysis (CEMA) MTD.",
"Side channel attacks on cryptographic systems exploit information gained from physical implementations rather than theoretical weaknesses of a scheme. In recent years, major achievements were made for the class of so called access-driven cache attacks. Such attacks exploit the leakage of the memory locations accessed by a victim process. In this paper we consider the AES block cipher and present an attack which is capable of recovering the full secret key in almost real time for AES-128, requiring only a very limited number of observed encryptions. Unlike previous attacks, we do not require any information about the plaintext (such as its distribution, etc.). Moreover, for the first time, we also show how the plaintext can be recovered without having access to the cipher text at all. It is the first working attack on AES implementations using compressed tables. There, no efficient techniques to identify the beginning of AES rounds is known, which is the fundamental assumption underlying previous attacks. We have a fully working implementation of our attack which is able to recover AES keys after observing as little as 100 encryptions. It works against the OpenS SL 0.9.8n implementation of AES on Linux systems. Our spy process does not require any special privileges beyond those of a standard Linux user. A contribution of probably independent interest is a denial of service attack on the task scheduler of current Linux systems (CFS), which allows one to observe (on average) every single memory access of a victim process.",
"Abstract Nowadays, many applications by default use encryption of network traffic to achieve a higher level of privacy and confidentiality. One of the most frequently applied cryptographic protocols is Transport Layer Security (TLS). However, also adversaries make use of TLS encryption in order to hide attacks or command & control communication. For detecting and analyzing such threats, making the contents of encrypted communication available to security tools becomes essential. The ideal solution for this problem should offer efficient and stealthy decryption without having a negative impact on over-all security. This paper presents TLSkex (TLS Key EXtractor), an approach to extract the master key of a TLS connection at runtime from the virtual machine's main memory using virtual machine introspection techniques. Afterwards, the master key is used to decrypt the TLS session. In contrast to other solutions, TLSkex neither manipulates the network connection nor the communicating application. Thus, our approach is applicable for malware analysis and intrusion detection in scenarios where applications cannot be modified. Moreover, TLSkex is also able to decrypt TLS sessions that use perfect forward secrecy key exchange algorithms. In this paper, we define a generic approach for TLS key extraction based on virtual machine introspection, present our TLSkex prototype implementation of this approach, and evaluate the prototype.",
"We describe several software side-channel attacks based on inter-process leakage through the state of the CPU’s memory cache. This leakage reveals memory access patterns, which can be used for cryptanalysis of cryptographic primitives that employ data-dependent table lookups. The attacks allow an unprivileged process to attack other processes running in parallel on the same processor, despite partitioning methods such as memory protection, sandboxing, and virtualization. Some of our methods require only the ability to trigger services that perform encryption or MAC using the unknown key, such as encrypted disk partitions or secure network links. Moreover, we demonstrate an extremely strong type of attack, which requires knowledge of neither the specific plaintexts nor ciphertexts and works by merely monitoring the effect of the cryptographic process on the cache. We discuss in detail several attacks on AES and experimentally demonstrate their applicability to real systems, such as OpenSSL and Linux’s dm-crypt encrypted partitions (in the latter case, the full key was recovered after just 800 writes to the partition, taking 65 milliseconds). Finally, we discuss a variety of countermeasures which can be used to mitigate such attacks.",
"We describe several software side-channel attacks based on inter-process leakage through the state of the CPU’s memory cache. This leakage reveals memory access patterns, which can be used for cryptanalysis of cryptographic primitives that employ data-dependent table lookups. The attacks allow an unprivileged process to attack other processes running in parallel on the same processor, despite partitioning methods such as memory protection, sandboxing and virtualization. Some of our methods require only the ability to trigger services that perform encryption or MAC using the unknown key, such as encrypted disk partitions or secure network links. Moreover, we demonstrate an extremely strong type of attack, which requires knowledge of neither the specific plaintexts nor ciphertexts, and works by merely monitoring the effect of the cryptographic process on the cache. We discuss in detail several such attacks on AES, and experimentally demonstrate their applicability to real systems, such as OpenSSL and Linux’s dm-crypt encrypted partitions (in the latter case, the full key can be recovered after just 800 writes to the partition, taking 65 milliseconds). Finally, we describe several countermeasures for mitigating such attacks.",
"In this paper, we propose an algorithm for constructing guess-and-determine attacks on keystream generators and apply it to the cryptanalysis of the alternating step generator (ASG) and two its modifications (MASG and MASG0). In a guess-and-determine attack, we first “guess” some part of an initial state and then apply some procedure to determine, if the guess was correct and we can use the guessed information to solve the problem, thus performing an exhaustive search over all possible assignments of bits forming a chosen part of an initial state. We propose to use in the “determine” part the algorithms for solving Boolean satisfiability problem (SAT). It allows us to consider sets of bits with nontrivial structure. For each such set it is possible to estimate the runtime of a corresponding guess-and-determine attack via the Monte-Carlo method, so we can search for a set of bits yielding the best attack via a black-box optimization algorithm augmented with several SAT-specific features. We constructed and implemented such attacks on ASG, MASG, and MASG0 to prove that the constructed runtime estimations are reliable. We show, that the constructed attacks are better than the trivial ones, which imply exhaustive search over all possible states of the control register, and present the results of experiments on cryptanalysis of ASG and MASG MASG0 with total registers length of 72 and 96, which have not been previously published in the literature."
]
} |
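Several of the memory-analysis approaches above locate candidate keys by measuring the entropy of memory regions. As a generic illustration (not the TLSkex or MemDecrypt procedure), the sketch below slides a window over a byte buffer and flags windows whose Shannon entropy looks like random key material; the window size and threshold are arbitrary choices for the example.

```python
import math
from collections import Counter

def shannon_entropy(window: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte."""
    counts = Counter(window)
    n = len(window)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def high_entropy_regions(buf: bytes, window=32, step=16, threshold=4.5):
    """Yield (offset, entropy) for windows that look like random key material.

    For a 32-byte sample the entropy of uniformly random bytes is close to the
    5 bits/byte ceiling (log2 of the window size), while text, code and zeroed
    memory are usually far below the threshold.
    """
    for off in range(0, len(buf) - window + 1, step):
        h = shannon_entropy(buf[off:off + window])
        if h >= threshold:
            yield off, h

if __name__ == "__main__":
    import os
    memory = b"A" * 4096 + os.urandom(32) + b"B" * 4096   # fake memory dump
    for off, h in high_entropy_regions(memory):
        print(f"candidate key material at offset {off}: {h:.2f} bits/byte")
```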
1907.11914 | 2965936015 | We extend the state-of-the-art Cascade R-CNN with a simple feature sharing mechanism. Our approach focuses on the performance increases on high IoU but decreases on low IoU thresholds--a key problem this detector suffers from. Feature sharing is extremely helpful: our results show that given this mechanism embedded into all stages, we can easily narrow the gap between the last stage and preceding stages on low IoU thresholds without resorting to the commonly used testing ensemble but the network itself. We also observe obvious improvements on all IoU thresholds benefiting from feature sharing, and the resulting cascade structure can easily match or exceed its counterparts, only with negligible extra parameters introduced. To push the envelope, we demonstrate 43.2 AP on COCO object detection without any bells and whistles including testing ensemble, surpassing previous Cascade R-CNN by a large margin. Our framework is easy to implement and we hope it can serve as a general and strong baseline for future research. | Multi-stage object detectors have become very popular in recent years. Following the main idea of 'divide and conquer', these detectors optimize a simpler problem first, and then refine the more difficult one progressively. In the field of object detection, cascade can be introduced into two components, namely the proposal generation process usually called 'RPN' and the classification and localization prediction process usually called 'R-CNN' @cite_12 . In these works, for the former, @cite_17 @cite_22 @cite_25 propose to use a multi-stage procedure to generate accurate proposals, and then refine these proposals with a single Fast R-CNN @cite_18 . For the latter, Cascade R-CNN @cite_8 is the most famous object detector among them, with increasing foreground/background thresholds selected to refine the ROIs progressively. HTC @cite_7 follows this line of thought and proposes to refine the features in an interleaved manner, resulting in state-of-the-art performance on the instance segmentation task. Other works like @cite_2 @cite_19 also apply the R-CNN stage several times, but their performance is far behind Cascade R-CNN @cite_8 . | {
"cite_N": [
"@cite_18",
"@cite_22",
"@cite_7",
"@cite_8",
"@cite_19",
"@cite_2",
"@cite_25",
"@cite_12",
"@cite_17"
],
"mid": [
"2589615404",
"2743620784",
"2963418361",
"2610420510",
"2339367607",
"2797527871",
"2964241181",
"2953226057",
"2410641892"
],
"abstract": [
"Object proposals have recently emerged as an essential cornerstone for object detection. The current state-of-the-art object detectors employ object proposals to detect objects within a modest set of candidate bounding box proposals instead of exhaustively searching across an image using the sliding window approach. However, achieving high recall and good localization with few proposals is still a challenging problem. The challenge becomes even more difficult in the context of autonomous driving, in which small objects, occlusion, shadows, and reflections usually occur. In this paper, we present a robust object proposals re-ranking algorithm that effectivity re-ranks candidates generated from a customized class-independent 3DOP (3D Object Proposals) method using a two-stream convolutional neural network (CNN). The goal is to ensure that those proposals that accurately cover the desired objects are amongst the few top-ranked candidates. The proposed algorithm, which we call DeepStereoOP, exploits not only RGB images as in the conventional CNN architecture, but also depth features including disparity map and distance to the ground. Experiments show that the proposed algorithm outperforms all existing object proposal algorithms on the challenging KITTI benchmark in terms of both recall and localization. Furthermore, the combination of DeepStereoOP and Fast R-CNN achieves one of the best detection results of all three KITTI object classes. HighlightsWe present a robust object proposals re-ranking algorithm for object detection in autonomous driving.Both RGB images and depth features are included in the proposed two-stream CNN architecture called DeepStereoOP.Initial object proposals are generated from a customized class-independent 3DOP method.Experiments show that the proposed algorithm outperforms all existing object proposals algorithms.The combination of DeepStereoOP and Fast R-CNN achieves one of the best detection results on KITTI benchmark.",
"The region-based Convolutional Neural Network (CNN) detectors such as Faster R-CNN or R-FCN have already shown promising results for object detection by combining the region proposal subnetwork and the classification subnetwork together. Although R-FCN has achieved higher detection speed while keeping the detection performance, the global structure information is ignored by the position-sensitive score maps. To fully explore the local and global properties, in this paper, we propose a novel fully convolutional network, named as CoupleNet, to couple the global structure with local parts for object detection. Specifically, the object proposals obtained by the Region Proposal Network (RPN) are fed into the the coupling module which consists of two branches. One branch adopts the position-sensitive RoI (PSRoI) pooling to capture the local part information of the object, while the other employs the RoI pooling to encode the global and context information. Next, we design different coupling strategies and normalization ways to make full use of the complementary advantages between the global and local branches. Extensive experiments demonstrate the effectiveness of our approach. We achieve state-of-the-art results on all three challenging datasets, i.e. a mAP of 82.7 on VOC07, 80.4 on VOC12, and 34.4 on COCO. Codes will be made publicly available.",
"Object detection is a fundamental problem in image understanding. One popular solution is the R-CNN framework [15] and its fast versions [14, 27]. They decompose the object detection problem into two cascaded easier tasks: 1) generating object proposals from images, 2) classifying proposals into various object categories. Despite that we are handling with two relatively easier tasks, they are not solved perfectly and there's still room for improvement. In this paper, we push the \"divide and conquer\" solution even further by dividing each task into two sub-tasks. We call the proposed method \"CRAFT\" (Cascade Regionproposal-network And FasT-rcnn), which tackles each task with a carefully designed network cascade. We show that the cascade structure helps in both tasks: in proposal generation, it provides more compact and better localized object proposals, in object classification, it reduces false positives (mainly between ambiguous categories) by capturing both inter-and intra-category variances. CRAFT achieves consistent and considerable improvement over the state-of the-art on object detection benchmarks like PASCAL VOC 07 12 and ILSVRC.",
"Many modern approaches for object detection are two-staged pipelines. The first stage identifies regions of interest which are then classified in the second stage. Faster R-CNN is such an approach for object detection which combines both stages into a single pipeline. In this paper we apply Faster R-CNN to the task of company logo detection. Motivated by its weak performance on small object instances, we examine in detail both the proposal and the classification stage with respect to a wide range of object sizes. We investigate the influence of feature map resolution on the performance of those stages. Based on theoretical considerations, we introduce an improved scheme for generating anchor proposals and propose a modification to Faster R-CNN which leverages higher-resolution feature maps for small objects. We evaluate our approach on the FlickrLogos dataset improving the RPN performance from 0.52 to 0.71 (MABO) and the detection performance from 0.52 to @math (mAP).",
"Object detection is a fundamental problem in image understanding. One popular solution is the R-CNN framework and its fast versions. They decompose the object detection problem into two cascaded easier tasks: 1) generating object proposals from images, 2) classifying proposals into various object categories. Despite that we are handling with two relatively easier tasks, they are not solved perfectly and there's still room for improvement. In this paper, we push the \"divide and conquer\" solution even further by dividing each task into two sub-tasks. We call the proposed method \"CRAFT\" (Cascade Region-proposal-network And FasT-rcnn), which tackles each task with a carefully designed network cascade. We show that the cascade structure helps in both tasks: in proposal generation, it provides more compact and better localized object proposals; in object classification, it reduces false positives (mainly between ambiguous categories) by capturing both inter- and intra-category variances. CRAFT achieves consistent and considerable improvement over the state-of-the-art on object detection benchmarks like PASCAL VOC 07 12 and ILSVRC.",
"Recent CNN based object detectors, no matter one-stage methods like YOLO, SSD, and RetinaNe or two-stage detectors like Faster R-CNN, R-FCN and FPN are usually trying to directly finetune from ImageNet pre-trained models designed for image classification. There has been little work discussing on the backbone feature extractor specifically designed for the object detection. More importantly, there are several differences between the tasks of image classification and object detection. 1. Recent object detectors like FPN and RetinaNet usually involve extra stages against the task of image classification to handle the objects with various scales. 2. Object detection not only needs to recognize the category of the object instances but also spatially locate the position. Large downsampling factor brings large valid receptive field, which is good for image classification but compromises the object location ability. Due to the gap between the image classification and object detection, we propose DetNet in this paper, which is a novel backbone network specifically designed for object detection. Moreover, DetNet includes the extra stages against traditional backbone network for image classification, while maintains high spatial resolution in deeper layers. Without any bells and whistles, state-of-the-art results have been obtained for both object detection and instance segmentation on the MSCOCO benchmark based on our DetNet (4.8G FLOPs) backbone. The code will be released for the reproduction.",
"In object detection, an intersection over union (IoU) threshold is required to define positives and negatives. An object detector, trained with low IoU threshold, e.g. 0.5, usually produces noisy detections. However, detection performance tends to degrade with increasing the IoU thresholds. Two main factors are responsible for this: 1) overfitting during training, due to exponentially vanishing positive samples, and 2) inference-time mismatch between the IoUs for which the detector is optimal and those of the input hypotheses. A multi-stage object detection architecture, the Cascade R-CNN, is proposed to address these problems. It consists of a sequence of detectors trained with increasing IoU thresholds, to be sequentially more selective against close false positives. The detectors are trained stage by stage, leveraging the observation that the output of a detector is a good distribution for training the next higher quality detector. The resampling of progressively improved hypotheses guarantees that all detectors have a positive set of examples of equivalent size, reducing the overfitting problem. The same cascade procedure is applied at inference, enabling a closer match between the hypotheses and the detector quality of each stage. A simple implementation of the Cascade R-CNN is shown to surpass all single-model object detectors on the challenging COCO dataset. Experiments also show that the Cascade R-CNN is widely applicable across detector architectures, achieving consistent gains independently of the baseline detector strength. The code is available at https: github.com zhaoweicai cascade-rcnn.",
"Most of the recent successful methods in accurate object detection and localization used some variants of R-CNN style two stage Convolutional Neural Networks (CNN) where plausible regions were proposed in the first stage then followed by a second stage for decision refinement. Despite the simplicity of training and the efficiency in deployment, the single stage detection methods have not been as competitive when evaluated in benchmarks consider mAP for high IoU thresholds. In this paper, we proposed a novel single stage end-to-end trainable object detection network to overcome this limitation. We achieved this by introducing Recurrent Rolling Convolution (RRC) architecture over multi-scale feature maps to construct object classifiers and bounding box regressors which are \"deep in context\". We evaluated our method in the challenging KITTI dataset which measures methods under IoU threshold of 0.7. We showed that with RRC, a single reduced VGG-16 based model already significantly outperformed all the previously published results. At the time this paper was written our models ranked the first in KITTI car detection (the hard level), the first in cyclist detection and the second in pedestrian detection. These results were not reached by the previous single stage methods. The code is publicly available.",
"Convolutional neural networks (CNNs) have shown great performance as general feature representations for object recognition applications. However, for multi-label images that contain multiple objects from different categories, scales and locations, global CNN features are not optimal. In this paper, we incorporate local information to enhance the feature discriminative power. In particular, we first extract object proposals from each image. With each image treated as a bag and object proposals extracted from it treated as instances, we transform the multi-label recognition problem into a multi-class multi-instance learning problem. Then, in addition to extracting the typical CNN feature representation from each proposal, we propose to make use of ground-truth bounding box annotations (strong labels) to add another level of local information by using nearest-neighbor relationships of local regions to form a multi-view pipeline. The proposed multi-view multiinstance framework utilizes both weak and strong labels effectively, and more importantly it has the generalization ability to even boost the performance of unseen categories by partial strong labels from other categories. Our framework is extensively compared with state-of-the-art handcrafted feature based methods and CNN based methods on two multi-label benchmark datasets. The experimental results validate the discriminative power and the generalization ability of the proposed framework. With strong labels, our framework is able to achieve state-of-the-art results in both datasets."
]
} |
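The cascade idea summarized in the record above, resampling proposals with increasingly strict IoU thresholds while each stage refines the boxes produced by the previous one, can be illustrated with a small self-contained toy. This is only a sketch under stated assumptions, not the Cascade R-CNN implementation: the halfway update toward the ground truth merely stands in for a learned per-stage box regressor, and the thresholds 0.5, 0.6 and 0.7 are the values usually associated with Cascade R-CNN.

```python
import numpy as np

def iou(a, b):
    """Intersection over union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

gt = np.array([10.0, 10.0, 50.0, 50.0])    # one ground-truth box
box = np.array([14.0, 13.0, 56.0, 55.0])   # a noisy proposal

for stage, thr in enumerate([0.5, 0.6, 0.7], start=1):
    positive = iou(box, gt) >= thr          # stricter resampling at each stage
    box = box + 0.5 * (gt - box)            # stand-in for the stage's learned regressor
    print(f"stage {stage}: thr={thr}, positive={positive}, refined IoU={iou(box, gt):.2f}")
```

Running it shows the proposal staying on the positive side of each successively stricter threshold, because every stage starts from the refined boxes of the stage before it.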
1907.11914 | 2965936015 | We extend the state-of-the-art Cascade R-CNN with a simple feature sharing mechanism. Our approach addresses a key problem this detector suffers from: performance increases at high IoU thresholds but decreases at low ones. Feature sharing is extremely helpful: our results show that, with this mechanism embedded into all stages, we can easily narrow the gap between the last stage and the preceding stages at low IoU thresholds, relying on the network itself rather than the commonly used testing ensemble. We also observe clear improvements at all IoU thresholds benefiting from feature sharing, and the resulting cascade structure can easily match or exceed its counterparts, with only negligible extra parameters introduced. To push the envelope, we demonstrate 43.2 AP on COCO object detection without any bells and whistles, including a testing ensemble, surpassing the previous Cascade R-CNN by a large margin. Our framework is easy to implement and we hope it can serve as a general and strong baseline for future research. | Feature sharing has also been adopted in many approaches. In @cite_14 , sharing features in the RPN stage improves performance, and similar results can be found in @cite_6 @cite_7 across different tasks. Different from these methods, our approach focuses not only on overall improvements but also on narrowing the gap, relying on feature sharing within the network itself rather than the testing ensemble commonly used in cascaded approaches. | {
"cite_N": [
"@cite_14",
"@cite_7",
"@cite_6"
],
"mid": [
"2500719210",
"2798791651",
"2495387757"
],
"abstract": [
"The focus of our work is speeding up evaluation of deep neural networks in retrieval scenarios, where conventional architectures may spend too much time on negative examples. We propose to replace a monolithic network with our novel cascade of feature-sharing deep classifiers, called OnionNet, where subsequent stages may add both new layers as well as new feature channels to the previous ones. Importantly, intermediate feature maps are shared among classifiers, preventing them from the necessity of being recomputed. To accomplish this, the model is trained end-to-end in a principled way under a joint loss. We validate our approach in theory and on a synthetic benchmark. As a result demonstrated in three applications (patch matching, object detection, and image retrieval), our cascade can operate significantly faster than both monolithic networks and traditional cascades without sharing at the cost of marginal decrease in precision.",
"Effective convolutional features play an important role in saliency estimation but how to learn powerful features for saliency is still a challenging task. FCN-based methods directly apply multi-level convolutional features without distinction, which leads to sub-optimal results due to the distraction from redundant details. In this paper, we propose a novel attention guided network which selectively integrates multi-level contextual information in a progressive manner. Attentive features generated by our network can alleviate distraction of background thus achieve better performance. On the other hand, it is observed that most of existing algorithms conduct salient object detection by exploiting side-output features of the backbone feature extraction network. However, shallower layers of backbone network lack the ability to obtain global semantic information, which limits the effective feature learning. To address the problem, we introduce multi-path recurrent feedback to enhance our proposed progressive attention driven framework. Through multi-path recurrent connections, global semantic information from the top convolutional layer is transferred to shallower layers, which intrinsically refines the entire network. Experimental results on six benchmark datasets demonstrate that our algorithm performs favorably against the state-of-the-art approaches.",
"Large pose variations remain to be a challenge that confronts real-word face detection. We propose a new cascaded Convolutional Neural Network, dubbed the name Supervised Transformer Network, to address this challenge. The first stage is a multi-task Region Proposal Network (RPN), which simultaneously predicts candidate face regions along with associated facial landmarks. The candidate regions are then warped by mapping the detected facial landmarks to their canonical positions to better normalize the face patterns. The second stage, which is a RCNN, then verifies if the warped candidate regions are valid faces or not. We conduct end-to-end learning of the cascaded network, including optimizing the canonical positions of the facial landmarks. This supervised learning of the transformations automatically selects the best scale to differentiate face non-face patterns. By combining feature maps from both stages of the network, we achieve state-of-the-art detection accuracies on several public benchmarks. For real-time performance, we run the cascaded network only on regions of interests produced from a boosting cascade face detector. Our detector runs at 30 FPS on a single CPU core for a VGA-resolution image."
]
} |
1907.12051 | 2965243075 | Sparse Network Coding (SNC) has been a promising network coding scheme, improving on Random Linear Network Coding (RLNC) in terms of computational complexity. However, in this literature, there have been no analytical expressions for the probability of decoding a fraction of source messages after the transmission of some coded packets. In this work, we look into the problem of the probability of decoding a fraction of source messages, i.e., partial decoding, at the decoder for a system that uses SNC. We exploit the Principle of Inclusion and Exclusion to derive expressions for the partial decoding probability. The presented model predicts the probability of partial decoding with an average deviation of 6%. Our results show that SNC has great potential for recovering a fraction of the source message, especially at higher sparsity and lower Galois field size. Moreover, to achieve a better probability of partial decoding throughout the transmissions, we define a sparsity tuning scheme that significantly increases the probability of partial decoding. Our results show that this tuning scheme achieves a 16% improvement in terms of the probability of decoding a fraction of source packets with respect to traditional SNC. | The probability of decoding a fraction of source packets in the RLNC scheme has been a major research topic, in the context of both performance @cite_22 and security @cite_2 . The authors of @cite_2 showed an upper bound on the probability of decoding a fraction of source packets, while in @cite_22 , the authors derived exact expressions for the probability of partial decoding for RLNC. Unfortunately, none of these works can be extended to the SNC scheme. In @cite_5 , these expressions were used to study the security of RLNC in a multi-relay network. The authors of @cite_10 also found an exact expression for the probability of partial decoding in systematic RLNC. However, their analysis is only valid for the binary Galois field and also cannot be extended to SNC. | {
"cite_N": [
"@cite_5",
"@cite_10",
"@cite_22",
"@cite_2"
],
"mid": [
"2766982895",
"2951615571",
"2158058261",
"2017096524"
],
"abstract": [
"Opportunistic relaying has the potential to achieve full diversity gain, while random linear network coding (RLNC) can reduce latency and energy consumption. In recent years, there has been a growing interest in the integration of both schemes into wireless networks in order to reap their benefits, while considering security concerns. This paper considers a multi-relay network, where relay nodes employ RLNC to encode confidential data and transmit coded packets to a destination in the presence of an eavesdropper. Four relay selection protocols are studied covering a range of network capabilities, such as the availability of the eavesdropper’s channel state information or the possibility to pair the selected relay with a node that intentionally generates interference. For each case, expressions for the probability that a coded packet will not be recovered by a receiver, which can be either the destination or the eavesdropper, are derived. Based on those expressions, a framework is developed that characterizes the probability of the eavesdropper intercepting a sufficient number of coded packets and partially or fully recovering the confidential data. Simulation results confirm the validity and accuracy of the theoretical framework and unveil the security-reliability trade-offs attained by each RLNC-enabled relay selection protocol.",
"In an unreliable single-hop broadcast network setting, we investigate the throughput and decoding-delay performance of random linear network coding as a function of the coding window size and the network size. Our model consists of a source transmitting packets of a single flow to a set of @math users over independent erasure channels. The source performs random linear network coding (RLNC) over @math (coding window size) packets and broadcasts them to the users. We note that the broadcast throughput of RLNC must vanish with increasing @math , for any fixed @math Hence, in contrast to other works in the literature, we investigate how the coding window size @math must scale for increasing @math . Our analysis reveals that the coding window size of @math represents a phase transition rate, below which the throughput converges to zero, and above which it converges to the broadcast capacity. Further, we characterize the asymptotic distribution of decoding delay and provide approximate expressions for the mean and variance of decoding delay for the scaling regime of @math These asymptotic expressions reveal the impact of channel correlations on the throughput and delay performance of RLNC. We also show how our analysis can be extended to other rateless block coding schemes such as the LT codes. Finally, we comment on the extension of our results to the cases of dependent channels across users and asymmetric channel model.",
"We study the application of convolutional codes to two-way relay networks (TWRNs) with physical-layer network coding (PNC). When a relay node decodes coded signals transmitted by two source nodes simultaneously, we show that the Viterbi algorithm (VA) can be used by approximating the maximum likelihood (ML) decoding for XORed messages as two-user decoding. In this setup, for given memory length constraint, the two source nodes can choose the same convolutional code that has the largest free distance in order to maximize the performance. Motivated from the fact that the relay node only needs to decode XORed messages, a low complexity decoding scheme is proposed using a reduced-state trellis. We show that the reduced-state decoding can achieve the same diversity gain as the full-state decoding for fading channels.",
"We investigate a channel-coded physical-layer network coding (CPNC) scheme for binary-input Gaussian two-way relay channels. In this scheme, the codewords of the two users are transmitted simultaneously. The relay computes and forwards a network-coded (NC) codeword without complete decoding of the two users' individual messages. We propose a new punctured codebook method to explicitly find the distance spectrum of the CPNC scheme. Based on that, we derive an asymptotically tight performance bound for the error probability. Our analysis shows that, compared to the single-user scenario, the CPNC scheme exhibits the same minimum Euclidean distance but an increased multiplicity of error events with minimum distance. At a high SNR, this leads to an SNR penalty of at most ln2 (in linear scale), for long channel codes of various rates. Our analytical results match well with the simulated performance."
]
} |
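The partial decoding analysis in the record above is built on the Principle of Inclusion and Exclusion. For reference, the general principle for events A_1, ..., A_n is written out below; how these events are instantiated for SNC (for instance, A_i as the event that source packet i is recoverable from the received coded packets) is specific to the paper, and that reading is only an assumption here.

```latex
\Pr\!\Big(\bigcup_{i=1}^{n} A_i\Big)
  = \sum_{i=1}^{n}\Pr(A_i)
  - \sum_{1\le i<j\le n}\Pr(A_i\cap A_j)
  + \cdots
  + (-1)^{n+1}\Pr\!\Big(\bigcap_{i=1}^{n} A_i\Big)
  = \sum_{\emptyset\ne S\subseteq\{1,\dots,n\}} (-1)^{|S|+1}\,\Pr\!\Big(\bigcap_{i\in S} A_i\Big).
```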
1907.12051 | 2965243075 | Sparse Network Coding (SNC) has been a promising network coding scheme, improving on Random Linear Network Coding (RLNC) in terms of computational complexity. However, in this literature, there have been no analytical expressions for the probability of decoding a fraction of source messages after the transmission of some coded packets. In this work, we look into the problem of the probability of decoding a fraction of source messages, i.e., partial decoding, at the decoder for a system that uses SNC. We exploit the Principle of Inclusion and Exclusion to derive expressions for the partial decoding probability. The presented model predicts the probability of partial decoding with an average deviation of 6%. Our results show that SNC has great potential for recovering a fraction of the source message, especially at higher sparsity and lower Galois field size. Moreover, to achieve a better probability of partial decoding throughout the transmissions, we define a sparsity tuning scheme that significantly increases the probability of partial decoding. Our results show that this tuning scheme achieves a 16% improvement in terms of the probability of decoding a fraction of source packets with respect to traditional SNC. | Rateless codes, such as LT and Raptor codes, can be considered a binary implementation of SNC @cite_18 @cite_0 , and the partial decoding probability has been a major research topic in the literature on rateless codes. To mention a few, the authors of @cite_14 designed an algorithm for an optimal recovery rate, i.e., partial decoding probability, in LT codes. However, the results of this work are only asymptotically optimal and may only be employed for an infinite number of source packets. In @cite_17 , the authors provided a probability analysis for decoding a fraction of source packets based on the structure of the received coded packets. However, their analysis cannot provide any probability of partial decoding when the exact structure of the received coded packets is unknown. The authors of @cite_9 and @cite_28 proposed algorithms to increase the probability of partial decoding for rateless codes. However, these works only increase the probability of partial decoding in specific stages of the whole transmission and also cannot be extended to non-binary Galois fields. Moreover, these algorithms are based on the coded packets currently received by the decoder and require a large computational overhead while transmitting coded packets. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_28",
"@cite_9",
"@cite_0",
"@cite_17"
],
"mid": [
"2134499105",
"2110464368",
"2963279554",
"2949343062",
"2158058261",
"2228116382"
],
"abstract": [
"In this correspondence, a generalization of rateless codes is proposed. The proposed codes provide unequal error protection (UEP). The asymptotic properties of these codes under the iterative decoding are investigated. Moreover, upper and lower bounds on maximum-likelihood (ML) decoding error probabilities of finite-length LT and Raptor codes for both equal and unequal error protection schemes are derived. Further, our work is verified with simulations. Simulation results indicate that the proposed codes provide desirable UEP. We also note that the UEP property does not impose a considerable drawback on the overall performance of the codes. Moreover, we discuss that the proposed codes can provide unequal recovery time (URT). This means that given a target bit error rate, different parts of information bits can be decoded after receiving different amounts of encoded bits. This implies that the information bits can be recovered in a progressive manner. This URT property may be used for sequential data recovery in video audio streaming",
"The performance of a novel fountain coding scheme based on maximum distance separable (MDS) codes constructed over Galois fields of order q>=2 is investigated. Upper and lower bounds on the decoding failure probability under maximum likelihood decoding are developed. Differently from Raptor codes (which are based on a serial concatenation of a high-rate outer block code, and an inner Luby-transform code), the proposed coding scheme can be seen as a parallel concatenation of an outer MDS code and an inner random linear fountain code, both operating on the same Galois field. A performance assessment is performed on the gain provided by MDS based fountain coding over linear random fountain coding in terms of decoding failure probability vs. overhead. It is shown how, for example, the concatenation of a (15,10) Reed-Solomon code and a linear random fountain code over F16 brings to a decoding failure probability 4 orders of magnitude lower than the linear random fountain code for the same overhead in a channel with a packet loss probability of epsilon=0.05. Moreover, it is illustrated how the performance of the concatenated fountain code approaches that of an idealized fountain code for higher-order Galois fields and moderate packet loss probabilities. The scheme introduced is of special interest for the distribution of data using small block sizes.",
"We consider coding schemes for computationally bounded channels, which can introduce an arbitrary set of errors as long as (a) the fraction of errors is bounded with high probability by a parameter p and (b) the process that adds the errors can be described by a sufficiently “simple” circuit. Codes for such channel models are attractive since, like codes for standard adversarial errors, they can handle channels whose true behavior is unknown or varying over time. For two classes of channels, we provide explicit, efficiently encodable decodable codes of optimal rate where only inefficiently decodable codes were previously known. In each case, we provide one encoder decoder that works for every channel in the class. The encoders are randomized, and probabilities are taken over the (local, unknown to the decoder) coins of the encoder and those of the channel. Unique decoding for additive errors: We give the first construction of a polynomial-time encodable decodable code for additive (a.k.a. oblivious) channels that achieve the Shannon capacity 1 − H(p). These are channels that add an arbitrary error vector e ∈ 0, 1 N of weight at most pN to the transmitted word; the vector e can depend on the code but not on the randomness of the encoder or the particular transmitted word. Such channels capture binary symmetric errors and burst errors as special cases. List decoding for polynomial-time channels: For every constant c > 0, we construct codes with optimal rate (arbitrarily close to 1 − H(p)) that efficiently recover a short list containing the correct message with high probability for channels describable by circuits of size at most Nc. Our construction is not fully explicit but rather Monte Carlo (we give an algorithm that, with high probability, produces an encoder decoder pair that works for all time Nc channels). We are not aware of any channel models considered in the information theory literature other than purely adversarial channels, which require more than linear-size circuits to implement. We justify the relaxation to list decoding with an impossibility result showing that, in a large range of parameters (p > 1 4), codes that are uniquely decodable for a modest class of channels (online, memoryless, nonuniform channels) cannot have positive rate.",
"In this paper, collocated and distributed space-time block codes (DSTBCs) which admit multi-group maximum likelihood (ML) decoding are studied. First the collocated case is considered and the problem of constructing space-time block codes (STBCs) which optimally tradeoff rate and ML decoding complexity is posed. Recently, sufficient conditions for multi-group ML decodability have been provided in the literature and codes meeting these sufficient conditions were called Clifford Unitary Weight (CUW) STBCs. An algebraic framework based on extended Clifford algebras is proposed to study CUW STBCs and using this framework, the optimal tradeoff between rate and ML decoding complexity of CUW STBCs is obtained for few specific cases. Code constructions meeting this tradeoff optimally are also provided. The paper then focuses on multi-group ML decodable DSTBCs for application in synchronous wireless relay networks and three constructions of four-group ML decodable DSTBCs are provided. Finally, the OFDM based Alamouti space-time coded scheme proposed by Li-Xia for a 2 relay asynchronous relay network is extended to a more general transmission scheme that can achieve full asynchronous cooperative diversity for arbitrary number of relays. It is then shown how differential encoding at the source can be combined with the proposed transmission scheme to arrive at a new transmission scheme that can achieve full cooperative diversity in asynchronous wireless relay networks with no channel information and also no timing error knowledge at the destination node. Four-group decodable DSTBCs applicable in the proposed OFDM based transmission scheme are also given.",
"We study the application of convolutional codes to two-way relay networks (TWRNs) with physical-layer network coding (PNC). When a relay node decodes coded signals transmitted by two source nodes simultaneously, we show that the Viterbi algorithm (VA) can be used by approximating the maximum likelihood (ML) decoding for XORed messages as two-user decoding. In this setup, for given memory length constraint, the two source nodes can choose the same convolutional code that has the largest free distance in order to maximize the performance. Motivated from the fact that the relay node only needs to decode XORed messages, a low complexity decoding scheme is proposed using a reduced-state trellis. We show that the reduced-state decoding can achieve the same diversity gain as the full-state decoding for fading channels.",
"This paper focuses on low complexity successive cancellation list (SCL) decoding of polar codes. In particular, using the fact that splitting may be unnecessary when the reliability of decoding the unfrozen bit is sufficiently high, a novel splitting rule is proposed. Based on this rule, it is conjectured that, if the correct path survives at some stage, it tends to survive till termination without splitting with high probability. On the other hand, the incorrect paths are more likely to split at the following stages. Motivated by these observations, a simple counter that counts the successive number of stages without splitting is introduced for each decoding path to facilitate the identification of correct and incorrect paths. Specifically, any path with counter value larger than a predefined threshold @math is deemed to be the correct path, which will survive at the decoding stage, while other paths with counter value smaller than the threshold will be pruned, thereby reducing the decoding complexity. Furthermore, it is proved that there exists a unique unfrozen bit @math , after which the successive cancellation decoder achieves the same error performance as the maximum likelihood decoder if all the prior unfrozen bits are correctly decoded, which enables further complexity reduction. Simulation results demonstrate that the proposed low complexity SCL decoder attains performance similar to that of the conventional SCL decoder, while achieving substantial complexity reduction."
]
} |
1907.12051 | 2965243075 | Sparse Network Coding (SNC) has been a promising network coding scheme, improving on Random Linear Network Coding (RLNC) in terms of computational complexity. However, in this literature, there have been no analytical expressions for the probability of decoding a fraction of source messages after the transmission of some coded packets. In this work, we look into the problem of the probability of decoding a fraction of source messages, i.e., partial decoding, at the decoder for a system that uses SNC. We exploit the Principle of Inclusion and Exclusion to derive expressions for the partial decoding probability. The presented model predicts the probability of partial decoding with an average deviation of 6%. Our results show that SNC has great potential for recovering a fraction of the source message, especially at higher sparsity and lower Galois field size. Moreover, to achieve a better probability of partial decoding throughout the transmissions, we define a sparsity tuning scheme that significantly increases the probability of partial decoding. Our results show that this tuning scheme achieves a 16% improvement in terms of the probability of decoding a fraction of source packets with respect to traditional SNC. | Another interesting line of research is Instantly Decodable Network Coding (IDNC) @cite_12 . In this scheme, the packets are sent in a way that guarantees instant decoding of at least one of the source packets. However, the analyses of the partial decoding probability of this family of codes, such as @cite_16 and @cite_24 , are only valid for the binary Galois field and require the presence of feedback in the system. Moreover, the IDNC family of codes relies heavily on feedback, whereas in our work the system is considered to be feedback-free. | {
"cite_N": [
"@cite_24",
"@cite_16",
"@cite_12"
],
"mid": [
"2030775967",
"1485005937",
"1972328812"
],
"abstract": [
"This paper investigates the use of instantly decodable network coding (IDNC) for minimizing the mean decoding delay in multicast cooperative data exchange systems, where the clients cooperate with each other to obtain their missing packets. Here, IDNC is used to reduce the decoding delay of each transmission across all clients. We first introduce a new framework to find the optimum client and coded packet that result in the minimum mean decoding delay. However, since finding the optimum solution of the proposed framework is NP-hard, we further propose a heuristic algorithm that aims to minimize the lower bound on the expected decoding delay in each transmission. The effectiveness of the proposed algorithm is assessed through simulations.",
"In this paper, we consider the problem of min- imizing the multicast decoding delay of generalized instantly decodable network coding (G-IDNC) over persistent forward and feedback erasure channels with feedback intermittence. In such an environment, the sender does not always receive acknowledgement from the receivers after each transmission. Moreover, both the forward and feedback channels are subject to persistent erasures, which can be modelled by a two state (good and bad states) Markov chain known as Gilbert-Elliott channel (GEC). Due to such feedback imperfections, the sender is unable to determine subsequent instantly decodable packets combination for all receivers. Given this harsh channel and feedback model, we first derive expressions for the probability distributions of decoding delay increments and then employ these expressions in formulating the minimum decoding problem in such environment as a maximum weight clique problem in the G-IDNC graph. We also show that the problem formulations in simpler channel and feedback models are special cases of our generalized formulation. Since this problem is NP-hard, we design a greedy algorithm to solve it and compare it to blind approaches proposed in literature. Through extensive simulations, our adaptive algorithm is shown to outperform the blind approaches in all situations and to achieve significant improvement in the decoding delay, especially when the channel is highly persistent. Index Terms—Multicast Channels, Persistent Erasure Chan- nels, G-IDNC, Decoding Delay, Lossy Intermittent Feedback, Maximum Weight Clique Problem.",
"This work aims at introducing two novel packet retransmission techniques for reliable multicast in the framework of Instantly Decodable Network Coding (IDNC). These methods are suitable for order- and delay-sensitive applications, where some information is of high importance for an earlier gain at the receiver's side. We introduce hence an Unequal Error Protection (UEP) scheme, showing by simulations that the Quality of Experience (QoE) for the end-users is improved even without complex encoding and decoding."
]
} |
1907.12051 | 2965243075 | Sparse Network Coding (SNC) has been a promising network coding scheme, improving on Random Linear Network Coding (RLNC) in terms of computational complexity. However, in this literature, there have been no analytical expressions for the probability of decoding a fraction of source messages after the transmission of some coded packets. In this work, we look into the problem of the probability of decoding a fraction of source messages, i.e., partial decoding, at the decoder for a system that uses SNC. We exploit the Principle of Inclusion and Exclusion to derive expressions for the partial decoding probability. The presented model predicts the probability of partial decoding with an average deviation of 6%. Our results show that SNC has great potential for recovering a fraction of the source message, especially at higher sparsity and lower Galois field size. Moreover, to achieve a better probability of partial decoding throughout the transmissions, we define a sparsity tuning scheme that significantly increases the probability of partial decoding. Our results show that this tuning scheme achieves a 16% improvement in terms of the probability of decoding a fraction of source packets with respect to traditional SNC. | The authors of @cite_13 and @cite_19 introduced and analyzed an improvement of the SNC scheme called perpetual coding. However, this coding scheme is not completely random and uses a structured type of coding to send packets, so the analysis of this coding scheme cannot be extended to the random SNC scheme, which is the scope of this paper. | {
"cite_N": [
"@cite_19",
"@cite_13"
],
"mid": [
"2017096524",
"1575671689"
],
"abstract": [
"We investigate a channel-coded physical-layer network coding (CPNC) scheme for binary-input Gaussian two-way relay channels. In this scheme, the codewords of the two users are transmitted simultaneously. The relay computes and forwards a network-coded (NC) codeword without complete decoding of the two users' individual messages. We propose a new punctured codebook method to explicitly find the distance spectrum of the CPNC scheme. Based on that, we derive an asymptotically tight performance bound for the error probability. Our analysis shows that, compared to the single-user scenario, the CPNC scheme exhibits the same minimum Euclidean distance but an increased multiplicity of error events with minimum distance. At a high SNR, this leads to an SNR penalty of at most ln2 (in linear scale), for long channel codes of various rates. Our analytical results match well with the simulated performance.",
"We propose and design a practical modulation-coded (MC) physical-layer network coding (PNC) scheme to approach the capacity limits of Gaussian and fading two-way relay channels (TWRCs). In the proposed scheme, an irregular repeat–accumulate (IRA) MC over @math with the same random coset is employed at two users, which directly maps the message sequences into coded PAM or QAM symbol sequences. The relay chooses appropriate network coding coefficients and computes the associated finite-field linear combinations of the two users' message sequences using an iterative belief propagation algorithm. For a symmetric Gaussian TWRC, we show that, by introducing the same random coset vector at the two users and a time-varying accumulator in the IRA code, the MC-PNC scheme exhibits symmetry and permutation-invariant properties for the soft information distribution of the network-coded message sequence (NCMS). We explore these properties in analyzing the convergence behavior of the scheme and optimizing the MC to approach the capacity limit of a TWRC. For a block fading TWRC, we present a new MC linear PNC scheme and an algorithm used at the relay for computing the NCMS. We demonstrate that our developed schemes achieve near-capacity performance in both Gaussian and Rayleigh fading TWRCs. For example, our designed codes over GF(7) and GF(3) with a code rate of 3 4 are within 1 and 1.2 dB of the TWRC capacity, respectively. Our method can be regarded as a practical embodiment of the notion of compute-and-forward with a good nested lattice code, and it can be applied to a wide range of network configurations."
]
} |
1907.11907 | 2966776769 | Lemmatization, finding the basic morphological form of a word in a corpus, is an important step in many natural language processing tasks when working with morphologically rich languages. We describe and evaluate Nefnir, a new open source lemmatizer for Icelandic. Nefnir uses suffix substitution rules, derived from a large morphological database, to lemmatize tagged text. Evaluation shows that for correctly tagged text, Nefnir obtains an accuracy of 99.55%, and for text tagged with a PoS tagger, the accuracy obtained is 96.88%. | Machine learning methods emerged to make the rule-learning process more effective, and various algorithms have been developed. These methods rely on training data, which can be a corpus of words and their lemmas or a large morphological lexicon @cite_6 . By analyzing the training data, transformation rules are formed, which can subsequently be used to find lemmas in new texts, given the word forms. | {
"cite_N": [
"@cite_6"
],
"mid": [
"2120861206"
],
"abstract": [
"In spite of their superior performance, neural probabilistic language models (NPLMs) remain far less widely used than n-gram models due to their notoriously long training times, which are measured in weeks even for moderately-sized datasets. Training NPLMs is computationally expensive because they are explicitly normalized, which leads to having to consider all words in the vocabulary when computing the log-likelihood gradients. We propose a fast and simple algorithm for training NPLMs based on noise-contrastive estimation, a newly introduced procedure for estimating unnormalized continuous distributions. We investigate the behaviour of the algorithm on the Penn Treebank corpus and show that it reduces the training times by more than an order of magnitude without affecting the quality of the resulting models. The algorithm is also more efficient and much more stable than importance sampling because it requires far fewer noise samples to perform well. We demonstrate the scalability of the proposed approach by training several neural language models on a 47M-word corpus with a 80K-word vocabulary, obtaining state-of-the-art results on the Microsoft Research Sentence Completion Challenge dataset."
]
} |
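The Nefnir record above describes lemmatization through suffix-substitution rules learned from tagged data: given a word form and its PoS tag, the longest applicable rule rewrites the word ending into the lemma ending. The toy below only illustrates that mechanism; the rule table and the English-like examples are hypothetical, whereas Nefnir derives its rules for Icelandic from a large morphological database.

```python
# Hypothetical (tag, suffix) -> replacement rules; not Nefnir's actual rule set.
RULES = {
    ("NOUN", "ies"): "y",
    ("NOUN", "s"): "",
    ("VERB", "ing"): "",
    ("VERB", "ed"): "",
}

def lemmatize(word, tag):
    """Apply the longest suffix rule available for this PoS tag, if any."""
    for suffix in sorted((s for t, s in RULES if t == tag), key=len, reverse=True):
        if word.endswith(suffix):
            return word[: len(word) - len(suffix)] + RULES[(tag, suffix)]
    return word  # no rule matched; keep the word form unchanged

print(lemmatize("stories", "NOUN"))  # story
print(lemmatize("walked", "VERB"))   # walk
```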
1907.12108 | 2965617855 | In this paper, we present CAiRE, an end-to-end empathetic conversation agent. Our system adapts the TransferTransfo (, 2019) learning approach, which fine-tunes a large-scale pre-trained language model with multi-task objectives: response language modeling, response prediction and dialogue emotion detection. We evaluate our model on the recently proposed empathetic-dialogues dataset (, 2019); the experimental results show that CAiRE achieves state-of-the-art performance on dialogue emotion detection and empathetic response generation. | Previous work @cite_12 @cite_5 @cite_0 showed that leveraging a large amount of data to learn context-sensitive features from a language model can create state-of-the-art models for a wide range of tasks. Taking this further, later work deployed higher-capacity models and improved the state-of-the-art results. In this paper, we build our empathetic chatbot on a pre-trained language model and achieve state-of-the-art results on dialogue emotion detection and empathetic response generation. | {
"cite_N": [
"@cite_0",
"@cite_5",
"@cite_12"
],
"mid": [
"2950813464",
"2773498419",
"2896457183"
],
"abstract": [
"With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling. However, relying on corrupting the input with masks, BERT neglects dependency between the masked positions and suffers from a pretrain-finetune discrepancy. In light of these pros and cons, we propose XLNet, a generalized autoregressive pretraining method that (1) enables learning bidirectional contexts by maximizing the expected likelihood over all permutations of the factorization order and (2) overcomes the limitations of BERT thanks to its autoregressive formulation. Furthermore, XLNet integrates ideas from Transformer-XL, the state-of-the-art autoregressive model, into pretraining. Empirically, XLNet outperforms BERT on 20 tasks, often by a large margin, and achieves state-of-the-art results on 18 tasks including question answering, natural language inference, sentiment analysis, and document ranking.",
"Probabilistic graphical models, such as partially observable Markov decision processes (POMDPs), have been used in stochastic spoken dialog systems to handle the inherent uncertainty in speech recognition and language understanding. Such dialog systems suffer from the fact that only a relatively small number of domain variables are allowed in the model, so as to ensure the generation of good-quality dialog policies. At the same time, the non-language perception modalities on robots, such as vision-based facial expression recognition and Lidar-based distance detection, can hardly be integrated into this process. In this paper, we use a probabilistic commonsense reasoner to “guide” our POMDP-based dialog manager, and present a principled, multimodal dialog management (MDM) framework that allows the robot's dialog belief state to be seamlessly updated by both observations of human spoken language, and exogenous events such as the change of human facial expressions. The MDM approach has been implemented and evaluated both in simulation and on a real mobile robot using guidance tasks.",
"We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5 (7.7 point absolute improvement), MultiNLI accuracy to 86.7 (4.6 absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement)."
]
} |
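The CAiRE record above fine-tunes a pre-trained language model with three objectives: response language modeling, response prediction, and dialogue emotion detection. The snippet below is a minimal sketch of how such per-task losses are commonly summed during fine-tuning; the tensor shapes, the 32-way emotion head and the equal weighting of the terms are assumptions for illustration, not the authors' exact configuration.

```python
import torch
import torch.nn.functional as F

batch, seq_len, vocab = 4, 16, 50000
lm_logits = torch.randn(batch, seq_len, vocab)   # stand-in for decoder outputs
lm_targets = torch.randint(0, vocab, (batch, seq_len))
cls_logits = torch.randn(batch, 2)               # gold response vs. distractor
cls_targets = torch.randint(0, 2, (batch,))
emo_logits = torch.randn(batch, 32)              # assumed 32 emotion labels
emo_targets = torch.randint(0, 32, (batch,))

# Multi-task objective: language modeling + response prediction + emotion detection.
loss = (F.cross_entropy(lm_logits.view(-1, vocab), lm_targets.view(-1))
        + F.cross_entropy(cls_logits, cls_targets)
        + F.cross_entropy(emo_logits, emo_targets))
print(loss.item())
```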
1907.11866 | 2966500445 | Wirelessly powered backscatter communication (WPBC) has been identified as a promising technology for low-power communication systems, which can reap the benefits of energy beamforming to improve energy transfer efficiency. Existing studies on energy beamforming fail to simultaneously take energy supply and information transfer in WPBC into account. This paper takes the first step to fill this gap by considering the trade-off between the energy harvesting rate and the achievable rate using estimated backscatter channel state information (BS-CSI). To ensure reliable communication and user fairness, we formulate the energy beamforming design as a max-min optimization problem, maximizing the minimum achievable rate over all tags subject to the energy constraint. We derive the closed-form expression of the energy harvesting rate, as well as a lower bound on the ergodic achievable rate. Our numerical results indicate that our scheme can significantly outperform state-of-the-art energy beamforming schemes. Additionally, the proposed scheme achieves performance comparable to that obtained via beamforming with perfect CSI. | Significant progress has recently been made on energy beamforming in wirelessly powered communication @cite_5 @cite_1 @cite_2 @cite_9 . To harness the benefits of energy beamforming, many efforts have been made to enable energy beamforming in WPBC. The authors of @cite_0 focus on optimizing the transmit beamforming to maximize the sum rate of a cooperative WPBC system, while @cite_8 investigates energy beamforming in a relay WPBC system. Both of these studies assume that F-CSI is available to achieve energy beamforming. However, the closed-loop propagation and power-limited tags make it hard to obtain F-CSI. Instead, @cite_10 performs energy beamforming via the estimated BS-CSI to improve WET efficiency. In departure from these studies, our work considers the beamforming design for both energy supply and data transfer to ensure communication performance. | {
"cite_N": [
"@cite_8",
"@cite_9",
"@cite_1",
"@cite_0",
"@cite_2",
"@cite_5",
"@cite_10"
],
"mid": [
"2964056649",
"2002932684",
"2584574674",
"2760327456",
"2139569912",
"2101261257",
"2051577697"
],
"abstract": [
"We study RF-enabled wireless energy transfer (WET) via energy beamforming, from a multi-antenna energy transmitter (ET) to multiple energy receivers (ERs) in a backscatter communication system such as RFID. The acquisition of the forward-channel (i.e., ET-to-ER) state information (F-CSI) at the ET (or RFID reader) is challenging, since the ERs (or RFID tags) are typically too energy-and-hardware-constrained to estimate or feedback the F-CSI. The ET leverages its observed backscatter signals to estimate the backscatter-channel (i.e., ET-to-ER-to-ET) state information (BS-CSI) directly. We first analyze the harvested energy obtained using the estimated BS-CSI. Furthermore, we optimize the resource allocation to maximize the total utility of harvested energy. For WET to single ER, we obtain the optimal channel-training energy in a semiclosed form. For WET to multiple ERs, we optimize the channel-training energy and the energy allocation weights for different energy beams. For the straightforward weighted-sum-energy (WSE) maximization, the optimal WET scheme is shown to use only one energy beam, which leads to unfairness among ERs and motivates us to consider the complicated proportional-fair-energy (PFE) maximization. For PFE maximization, we show that it is a biconvex problem, and propose a block-coordinate-descent-based algorithm to find the close-to-optimal solution. Numerical results show that with the optimized solutions, the harvested energy suffers slight reduction of less than 10 , compared to that obtained using the perfect F-CSI.",
"In this letter, we study the robust beamforming problem for the multi-antenna wireless broadcasting system with simultaneous information and power transmission, under the assumption of imperfect channel state information (CSI) at the transmitter. Following the worst-case deterministic model, our objective is to maximize the worst-case harvested energy for the energy receiver while guaranteeing that the rate for the information receiver is above a threshold for all possible channel realizations. Such problem is nonconvex with infinite number of constraints. Using certain transformation techniques, we convert this problem into a relaxed semidefinite programming problem (SDP) which can be solved efficiently. We further show that the solution of the relaxed SDP problem is always rank-one. This indicates that the relaxation is tight and we can get the optimal solution for the original problem. Simulation results are presented to validate the effectiveness of the proposed algorithm.",
"This paper analyzes the performance of information and energy beamforming in multiple-input multiple- output (MIMO) wireless communications systems, where a self-powered multi-antenna hybrid access point (AP) coordinates wireless information and power transfer (WIPT) with an energy-constrained multi-antenna user terminal (UT). The wirelessly powered UT scavenge energy from the hybrid AP radio-frequency (RF) signal in the downlink (DL) using the harvest-then-transmit protocol, then uses the harvested energy to send its information to the hybrid AP in the uplink (UL). To maximize the overall signal-to-noise ratio (SNR) as well as the harvested energy so as to mitigate the severe effects of fading and enable long-distance wireless power transfer, information and energy beamforming is investigated by steering the transmitted information and energy signals along the strongest eigenmode. To this end, exact and lower-bound expressions for the outage probability and ergodic capacity are presented in closed-form, through which the throughput of the delay- constrained and delay-tolerant transmission modes are analyzed, respectively. Numerical results sustained by Monte Carlo simulations show the exactness and tightness of the proposed analytical expressions. The impact of various parameters such as energy harvesting time, hybrid AP transmit power and the number of antennas on the system throughput is also considered.",
"This paper investigates the optimal resource allocation in wireless powered communication network with user cooperation, where two single-antenna users first harvest energy from the signals transmitted by a multi-antenna hybrid access point (H-AP) and then cooperatively send information to the H-AP using their harvested energy. To explore the system information transmission performance limit, an optimization problem is formulated to maximize the weighted sum-rate (WSR) by jointly optimizing energy beamforming vector, time assignment, and power allocation. Besides, another optimization problem is also formulated to minimize the total transmission time for given amount of data required to be transmitted at the two sources. Because both problems are non-convex, we first transform them to be convex by using proper variable substitutions and then apply semi-definite relaxation to solve them. We theoretically prove that our proposed methods guarantee the global optimum of both problems. Simulation results show that system WSR and transmission time can be significantly enhanced by using energy beamforming and user cooperation. It is observed that when the total amount of information of two users is fixed, with the increase of the information amount of the user relatively farther away from the H-AP, the transmission time of the user cooperation scheme decreases while that of the direct transmission increases. Besides, the effects of user position on the system performances are also discussed, which provides some useful insights.",
"In this work, we consider a multiple-input single-output system in which an access point (AP) performs a simultaneous wireless information and power transfer (SWIPT) to serve a user terminal (UT) that is not equipped with external power supply. To assess the efficacy of the SWIPT, we target a practically relevant scenario characterized by imperfect channel state information (CSI) at the transmitter, the presence of penalties associated to the CSI acquisition procedures, and non-zero power consumption for the operations performed by the UT, such as CSI estimation, uplink signaling and data decoding. We analyze three different cases for the CSI knowledge at the AP: no CSI, and imperfect CSI in case of time-division duplexing and frequency-division duplexing communications. Closed-form representations of the ergodic downlink rate and both the energy shortage and data outage probability are derived for the three cases. Additionally, analytic expressions for the ergodically optimal duration of power transfer and channel estimation feedback phases are provided. Our numerical findings verify the correctness of our derivations, and also show the importance and benefits of CSI knowledge at the AP in SWIPT systems, albeit imperfect and acquired at the expense of the time available for the information transfer.",
"This paper considers the problem of power management and throughput maximization for energy neutral operation when using an energy harvesting sensor (EHS) to send data over a wireless link. The EHS is assumed to be able to harvest energy at a constant rate, and use a fixed part of the energy harvested in a slot for measuring the channel state. The rest of the energy harvested is available for transmission, however, it can be stored in an inefficient battery if it is not fully utilized. The key constraint that the EHS needs to satisfy is energy neutrality, i.e., the expected energy drawn from the battery should equal the expected energy deposited into the battery. In this scenario, two popular models for data transmission are contrasted: the constant bit rate (CBR) model and the variable bit rate (VBR) model. In the CBR model, it is assumed that the EHS are designed to transmit data at a constant rate (using a fixed modulation and coding scheme) but are power-controlled. In the VBR model, the EHS selects both the transmit power and the data rate of transmission in each slot based on the channel instantiation. A framework under which the system designer can optimize several parameters of the EHS that determine the average data rate performance when the channel is Rayleigh fading is developed. Using this framework, the two transmission schemes are contrasted. It is shown that, with the right choice of parameter settings, the CBR scheme can perform nearly as well as the VBR scheme at significantly lower complextiy. The usefulness and validity of the framework developed is illustrated through simulations for specific examples.",
"We study in this paper the network spectral efficiency of a multiple-input multiple-output (MIMO) ad hoc network with K simultaneous communicating transmitter-receiver pairs. Assuming that each transmitter is equipped with t antennas and each receiver with r antennas and each receiver implements single-user detection, we show that in the absence of channel state information (CSI) at the transmitters, the asymptotic network spectral efficiency is limited by r nats s Hz as Krarrinfin and is independent of t and the transmit power. With CSI corresponding to the intended receiver available at the transmitter, we demonstrate that the asymptotic spectral efficiency is at least t+r+2radictr nats s Hz. Asymptotically optimum signaling is also derived under the same CSI assumption, i.e., each transmitter knows the channel corresponding to its desired receiver only. Further capacity improvement is possible with stronger CSI assumption; we demonstrate this using a heuristic interference suppression transmit beamforming approach. The conventional orthogonal transmission approach is also analyzed. In particular, we show that with idealized medium access control, the channelized transmission has unbounded asymptotic spectral efficiency under the constant per-user power constraint. The impact of different power constraints on the asymptotic spectral efficiency is also carefully examined. Finally, numerical examples are given that confirm our analysis"
]
} |
1907.11817 | 2954117579 | Source code similarity are increasingly used in application development to identify clones, isolate bugs, and find copy-rights violations. Similar code fragments can be very problematic due to the fact that errors in the original code must be fixed in every copy. Other maintenance changes, such as extensions or patches, must be applied multiple times. Furthermore, the diversity of coding styles and flexibility of modern languages makes it difficult and cost ineffective to manually inspect large code repositories. Therefore, detection is only feasible by automatic techniques. We present an efficient and scalable approach for similar code fragment identification based on source code control flow graphs fingerprinting. The source code is processed to generate control flow graphs that are then hashed to create a unique fingerprint of the code capturing semantics as well as syntax similarity. The fingerprints can then be efficiently stored and retrieved to perform similarity search between code fragments. Experimental results from our prototype implementation supports the validity of our approach and show its effectiveness and efficiency in comparison with other solutions. | Clones can be broadly categorized into four types based on the nature of their similarity @cite_40 @cite_42 @cite_5 @cite_48 . Type-1 clones are clone pairs that are identical to each other with no modification to the source code. Type-2 clones are clone pairs that differ only in literals and variable types. Type-3 clones are renamed clone pairs with some structural modifications such as additions, deletions, and rearrangement of statements. Type-4 clones are clone pairs that have different syntax but perform the same functionality (i.e., they are semantically equivalent). These are typically the most challenging to find and identify, yet they are the most relevant in the context of ERP systems @cite_12 @cite_23 . | {
"cite_N": [
"@cite_48",
"@cite_42",
"@cite_40",
"@cite_23",
"@cite_5",
"@cite_12"
],
"mid": [
"2171368158",
"1508590353",
"2114056383",
"1698439592",
"2807866521",
"2584966780"
],
"abstract": [
"Code clones are defined to be the exactly or nearly similar code fragments in a software system's code-base. The existing clone related studies reveal that code clones are likely to introduce bugs and inconsistencies in the code-base. However, although there are different types of clones, it is still unknown which types of clones have a higher likeliness of introducing bugs to the software systems and so, should be considered more important for managing with techniques such as refactoring or tracking. With this focus, we performed a study that compared the bug-proneness of the major clone-types: Type 1, Type 2, and Type 3. According to our experimental results on thousands of revisions of seven diverse subject systems, Type 3 clones exhibit the highest bug-proneness among the three clone-types. The bug-proneness of Type 1 clones is the lowest. Also, Type 3 clones have the highest likeliness of being co-changed consistently while experiencing bug-fixing changes. Moreover, the Type 3 clones that experience bug-fixes have a higher possibility of evolving following a Similarity Preserving Change Pattern (SPCP) compared to the bug-fix clones of the other two clone-types. From the experimental results it is clear that Type 3 clones should be given a higher priority than the other two clone-types when making clone management decisions. We believe that our study provides useful implications for ranking clones for refactoring and tracking.",
"SUMMARY Two similar code segments, or clones, form a clone pair within a software system. The changes to the clones over time create a clone evolution history. In this work, we study late propagation, a specific pattern of clone evolution. In late propagation, one clone in a clone pair is modified, causing the clone pair to diverge. The code segments are then reconciled in a later commit. Existing work has established late propagation as a clone evolution pattern and suggested that the pattern is related to a high number of faults. In this study, we examine the characteristics of late propagation in three long-lived software systems using the Simian ( Simon Harris, Victoria, Australia, http: www.harukizaemon.com simian), CCFinder, and NiCad (Software Technology Laboratory, Queen's University, Kingston, ON, Canada) clone detection tools. We define eight types of late propagation and compare them to other forms of clone evolution. Our results not only verify that late propagation is more harmful to software systems but also establish that some specific types of late propagations are more harmful than others. Specifically, two types are most risky: (1) when a clone experiences diverging changes and then a reconciling change without any modification to the other clone in a clone pair; and (2) when two clones undergo a diverging modification followed by a reconciling change that modifies both the clones in a clone pair. We also observe that the reconciliation in the former case is more prone to faults than in the latter case. We determine that the size of the clones experiencing late propagation has an effect on the fault proneness of specific types of late propagation genealogies. Lastly, we cannot report a correlation between the delay of the propagation of changes and its faults, as the fault proneness of each delay period is system dependent. Copyright © 2013 John Wiley & Sons, Ltd.",
"Code Clones - duplicated source fragments - are said to increase maintenance effort and to facilitate problems caused by inconsistent changes to identical parts. While this is certainly true for some clones and certainly not true for others, it is unclear how many clones are real threats to the system's quality and need to be taken care of. Our analysis of clone evolution in mature software projects shows that most clones are rarely changed and the number of unintentional inconsistent changes to clones is small. We thus have to carefully select the clones to be managed to avoid unnecessary effort managing clones with no risk potential.",
"This paper presents a technique to automatically identify duplicate and near duplicate functions in a large software system. The identification technique is based on metrics extracted from the source code using the tool Datrix sup TM . This clone identification technique uses 21 function metrics grouped into four points of comparison. Each point of comparison is used to compare functions and determine their cloning level. An ordinal scale of eight cloning levels is defined. The levels range from an exact copy to distinct functions. The metrics, the thresholds and the process used are fully described. The results of applying the clone detection technique to two telecommunication monitoring systems totaling one million lines of source code are provided as examples. The information provided by this study is useful in monitoring the maintainability of large software systems.",
"Source code clones are categorized into four types of increasing difficulty of detection, ranging from purely textual (Type-1) to purely semantic (Type-4). Most clone detectors reported in the literature work well up to Type-3, which accounts for syntactic differences. In between Type-3 and Type-4, however, there lies a spectrum of clones that, although still exhibiting some syntactic similarities, are extremely hard to detect – the Twilight Zone. Most clone detectors reported in the literature fail to operate in this zone. We present Oreo, a novel approach to source code clone detection that not only detects Type-1 to Type-3 clones accurately, but is also capable of detecting harder-to-detect clones in the Twilight Zone. Oreo is built using a combination of machine learning, information retrieval, and software metrics. We evaluate the recall of Oreo on BigCloneBench, and perform manual evaluation for precision. Oreo has both high recall and precision. More importantly, it pushes the boundary in detection of clones with moderate to weak syntactic similarity in a scalable manner",
"If two fragments of source code are identical to each other, they are called code clones. Code clones introduce difficulties in software maintenance and cause bug propagation. In this paper, we present a machine learning framework to automatically detect clones in software, which is able to detect Types-3 and the most complicated kind of clones, Type-4 clones. Previously used traditional features are often weak in detecting the semantic clones The novel aspects of our approach are the extraction of features from abstract syntax trees (AST) and program dependency graphs (PDG), representation of a pair of code fragments as a vector and the use of classification algorithms. The key benefit of this approach is that our approach can find both syntactic and semantic clones extremely well. Our evaluation indicates that using our new AST and PDG features is a viable methodology, since they improve detecting clones on the IJaDataset 2.0."
]
} |
1907.11817 | 2954117579 | Source code similarity are increasingly used in application development to identify clones, isolate bugs, and find copy-rights violations. Similar code fragments can be very problematic due to the fact that errors in the original code must be fixed in every copy. Other maintenance changes, such as extensions or patches, must be applied multiple times. Furthermore, the diversity of coding styles and flexibility of modern languages makes it difficult and cost ineffective to manually inspect large code repositories. Therefore, detection is only feasible by automatic techniques. We present an efficient and scalable approach for similar code fragment identification based on source code control flow graphs fingerprinting. The source code is processed to generate control flow graphs that are then hashed to create a unique fingerprint of the code capturing semantics as well as syntax similarity. The fingerprints can then be efficiently stored and retrieved to perform similarity search between code fragments. Experimental results from our prototype implementation supports the validity of our approach and show its effectiveness and efficiency in comparison with other solutions. | Several approaches have been proposed in the literature to identify similar source code, ranging from textual to semantic similarity identification. Generally, they are classified based on the source representations they work with. In text-based detection, the raw source code, with minimal transformation, is used to perform a pairwise comparison to identify similar source code @cite_27 . Token-based detection, on the other hand, extracts a sequence of tokens using compiler-style source code transformation @cite_25 . The sequence is then used to match tokens and identify duplicates in the repository, and the corresponding original code is returned as clones. In tree-based detection, the code is transformed to Abstract Syntax Trees (ASTs) that are then used in subtree matching algorithms to identify similar subtrees @cite_16 . Similarly, clone detection is expressed as a graph matching problem for Program Dependence Graphs (PDGs) in @cite_36 . Metric-based detection extracts a number of metrics from the source code fragments and then compares metrics rather than code or trees to identify similar code @cite_37 . | {
"cite_N": [
"@cite_37",
"@cite_36",
"@cite_27",
"@cite_16",
"@cite_25"
],
"mid": [
"2125260159",
"2584966780",
"2104609444",
"2157532207",
"2511803001"
],
"abstract": [
"Several techniques have been developed for identifying similar code fragments in programs. These similar fragments, referred to as code clones, can be used to identify redundant code, locate bugs, or gain insight into program design. Existing scalable approaches to clone detection are limited to finding program fragments that are similar only in their contiguous syntax. Other, semantics-based approaches are more resilient to differences in syntax, such as reordered statements, related statements interleaved with other unrelated statements, or the use of semantically equivalent control structures. However, none of these techniques have scaled to real world code bases. These approaches capture semantic information from Program Dependence Graphs (PDGs), program representations that encode data and control dependencies between statements and predicates. Our definition of a code clone is also based on this representation: we consider program fragments with isomorphic PDGs to be clones. In this paper, we present the first scalable clone detection algorithm based on this definition of semantic clones. Our insight is the reduction of the difficult graph similarity problem to a simpler tree similarity problem by mapping carefully selected PDG subgraphs to their related structured syntax. We efficiently solve the tree similarity problem to create a scalable analysis. We have implemented this algorithm in a practical tool and performed evaluations on several million-line open source projects, including the Linux kernel. Compared with previous approaches, our tool locates significantly more clones, which are often more semantically interesting than simple copied and pasted code fragments.",
"If two fragments of source code are identical to each other, they are called code clones. Code clones introduce difficulties in software maintenance and cause bug propagation. In this paper, we present a machine learning framework to automatically detect clones in software, which is able to detect Types-3 and the most complicated kind of clones, Type-4 clones. Previously used traditional features are often weak in detecting the semantic clones The novel aspects of our approach are the extraction of features from abstract syntax trees (AST) and program dependency graphs (PDG), representation of a pair of code fragments as a vector and the use of classification algorithms. The key benefit of this approach is that our approach can find both syntactic and semantic clones extremely well. Our evaluation indicates that using our new AST and PDG features is a viable methodology, since they improve detecting clones on the IJaDataset 2.0.",
"While finding clones in source code has drawn considerable attention, there has been only very little work in finding similar fragments in binary code and intermediate languages, such as Java bytecode. Some recent studies showed that it is possible to find distinct sets of clone pairs in bytecode representation of source code, which are not always detectable at source code-level. In this paper, we present a bytecode clone detection approach, called SeByte, which exploits the benefits of compilers (the bytecode representation) for detecting a specific type of semantic clones in Java bytecode. SeByte is a hybrid metric-based approach that takes advantage of both, Semantic Web technologies and Set theory. We use a two-step analysis process: (1) Pattern matching via Semantic Web querying and reasoning, and (2) Content matching, using Jaccard coefficient for set similarity measurement. Semantic Web-based pattern matching helps us to find method blocks which share similar patterns even in case of extreme dissimilarity (e.g., numerous repetitions or large gaps). Although it leads to high recall, it gives high false positive rate. We thus use the content matching (via Jaccard) to reduce false positive rate by focusing on content semantic resemblance. Our evaluation of four Java systems and five other tools shows that SeByte can detect a large number of semantic clones that are either not detected or supported by source code based clone detectors.",
"Existing research suggests that a considerable fraction (5-10 ) of the source code of large scale computer programs is duplicate code (\"clones\"). Detection and removal of such clones promises decreased software maintenance costs of possibly the same magnitude. Previous work was limited to detection of either near misses differing only in single lexems, or near misses only between complete functions. The paper presents simple and practical methods for detecting exact and near miss clones over arbitrary program fragments in program source code by using abstract syntax trees. Previous work also did not suggest practical means for removing detected clones. Since our methods operate in terms of the program structure, clones could be removed by mechanical methods producing in-lined procedures or standard preprocessor macros. A tool using these techniques is applied to a C production software system of some 400 K source lines, and the results confirm detected levels of duplication found by previous work. The tool produces macro bodies needed for clone removal, and macro invocations to replace the clones. The tool uses a variation of the well known compiler method for detecting common sub expressions. This method determines exact tree matches; a number of adjustments are needed to detect equivalent statement sequences, commutative operands, and nearly exact matches. We additionally suggest that clone detection could also be useful in producing more structured code, and in reverse engineering to discover domain concepts and their implementations.",
"Code clone detection is an important problem for software maintenance and evolution. Many approaches consider either structure or identifiers, but none of the existing detection techniques model both sources of information. These techniques also depend on generic, handcrafted features to represent code fragments. We introduce learning-based detection techniques where everything for representing terms and fragments in source code is mined from the repository. Our code analysis supports a framework, which relies on deep learning, for automatically linking patterns mined at the lexical level with patterns mined at the syntactic level. We evaluated our novel learning-based approach for code clone detection with respect to feasibility from the point of view of software maintainers. We sampled and manually evaluated 398 file- and 480 method-level pairs across eight real-world Java systems; 93 of the file- and method-level samples were evaluated to be true positives. Among the true positives, we found pairs mapping to all four clone types. We compared our approach to a traditional structure-oriented technique and found that our learning-based approach detected clones that were either undetected or suboptimally reported by the prominent tool Deckard. Our results affirm that our learning-based approach is suitable for clone detection and a tenable technique for researchers."
]
} |
1907.11817 | 2954117579 | Source code similarity are increasingly used in application development to identify clones, isolate bugs, and find copy-rights violations. Similar code fragments can be very problematic due to the fact that errors in the original code must be fixed in every copy. Other maintenance changes, such as extensions or patches, must be applied multiple times. Furthermore, the diversity of coding styles and flexibility of modern languages makes it difficult and cost ineffective to manually inspect large code repositories. Therefore, detection is only feasible by automatic techniques. We present an efficient and scalable approach for similar code fragment identification based on source code control flow graphs fingerprinting. The source code is processed to generate control flow graphs that are then hashed to create a unique fingerprint of the code capturing semantics as well as syntax similarity. The fingerprints can then be efficiently stored and retrieved to perform similarity search between code fragments. Experimental results from our prototype implementation supports the validity of our approach and show its effectiveness and efficiency in comparison with other solutions. | Generally, similar code identification techniques work at varying levels of granularity. Fine-grained detection leverages tokens, statements, and lines as the basis for detection and comparison @cite_32 . On the other hand, coarse-grained detection uses functions, methods, classes, or program files as the basic units of detection @cite_40 . Naturally, the finer the granularity of the tool, the longer it takes to find clone candidates. Equally, the coarser the granularity of the tool, the faster the detection, albeit with fewer detected clones @cite_4 . Detection tools therefore have to make design trade-offs between accuracy and performance on an almost constant basis, depending on the code base being examined. | {
"cite_N": [
"@cite_40",
"@cite_4",
"@cite_32"
],
"mid": [
"2511803001",
"2157532207",
"2547865220"
],
"abstract": [
"Code clone detection is an important problem for software maintenance and evolution. Many approaches consider either structure or identifiers, but none of the existing detection techniques model both sources of information. These techniques also depend on generic, handcrafted features to represent code fragments. We introduce learning-based detection techniques where everything for representing terms and fragments in source code is mined from the repository. Our code analysis supports a framework, which relies on deep learning, for automatically linking patterns mined at the lexical level with patterns mined at the syntactic level. We evaluated our novel learning-based approach for code clone detection with respect to feasibility from the point of view of software maintainers. We sampled and manually evaluated 398 file- and 480 method-level pairs across eight real-world Java systems; 93 of the file- and method-level samples were evaluated to be true positives. Among the true positives, we found pairs mapping to all four clone types. We compared our approach to a traditional structure-oriented technique and found that our learning-based approach detected clones that were either undetected or suboptimally reported by the prominent tool Deckard. Our results affirm that our learning-based approach is suitable for clone detection and a tenable technique for researchers.",
"Existing research suggests that a considerable fraction (5-10 ) of the source code of large scale computer programs is duplicate code (\"clones\"). Detection and removal of such clones promises decreased software maintenance costs of possibly the same magnitude. Previous work was limited to detection of either near misses differing only in single lexems, or near misses only between complete functions. The paper presents simple and practical methods for detecting exact and near miss clones over arbitrary program fragments in program source code by using abstract syntax trees. Previous work also did not suggest practical means for removing detected clones. Since our methods operate in terms of the program structure, clones could be removed by mechanical methods producing in-lined procedures or standard preprocessor macros. A tool using these techniques is applied to a C production software system of some 400 K source lines, and the results confirm detected levels of duplication found by previous work. The tool produces macro bodies needed for clone removal, and macro invocations to replace the clones. The tool uses a variation of the well known compiler method for detecting common sub expressions. This method determines exact tree matches; a number of adjustments are needed to detect equivalent statement sequences, commutative operands, and nearly exact matches. We additionally suggest that clone detection could also be useful in producing more structured code, and in reverse engineering to discover domain concepts and their implementations.",
"If two fragments of source code are identical to each other, they are called code clones. Code clones introduce difficulties in software maintenance and cause bug propagation. Coarse-grained clone detectors have higher precision than fine-grained, but fine-grained detectors have higher recall than coarse-grained. In this paper, we present a hybrid clone detection technique that first uses a coarse-grained technique to analyze clones effectively to improve precision. Subsequently, we use a fine-grained detector to obtain additional information about the clones and to improve recall. Our method detects Type-1 and Type-2 clones using hash values for blocks, and gapped code clones (Type-3) using block detection and subsequent comparison between them using Levenshtein distance and Cosine measures with varying thresholds."
]
} |
1907.11817 | 2954117579 | Source code similarity are increasingly used in application development to identify clones, isolate bugs, and find copy-rights violations. Similar code fragments can be very problematic due to the fact that errors in the original code must be fixed in every copy. Other maintenance changes, such as extensions or patches, must be applied multiple times. Furthermore, the diversity of coding styles and flexibility of modern languages makes it difficult and cost ineffective to manually inspect large code repositories. Therefore, detection is only feasible by automatic techniques. We present an efficient and scalable approach for similar code fragment identification based on source code control flow graphs fingerprinting. The source code is processed to generate control flow graphs that are then hashed to create a unique fingerprint of the code capturing semantics as well as syntax similarity. The fingerprints can then be efficiently stored and retrieved to perform similarity search between code fragments. Experimental results from our prototype implementation supports the validity of our approach and show its effectiveness and efficiency in comparison with other solutions. | Another challenge in finding duplicate code is the performance of querying and retrieving possible matches from a large code base. Fingerprinting and hashing have been used to improve the search efficiency @cite_44 . Hashing maps variable-size source code to a fixed-size fingerprint that can later be used to query and search for clones in linear time @cite_14 . However, a simple match does not work well for inexact matches. Others @cite_12 @cite_38 use hashing techniques to group similar source code fragments together, thus enhancing the accuracy and performance of clone detection techniques. However, this is less effective in detecting Type-4 clones, as hashing and fingerprints are based on the source code and not its semantics. Machine learning approaches have been proposed @cite_31 to link lexical-level features with syntactic-level features using semantic encoding techniques @cite_28 to improve Type-4 clone detection. However, in order for them to be effective, human experts need to analyze source code repositories to define the features that are most relevant for clone detection. | {
"cite_N": [
"@cite_38",
"@cite_14",
"@cite_28",
"@cite_44",
"@cite_31",
"@cite_12"
],
"mid": [
"2104609444",
"2157532207",
"2511803001",
"2298313545",
"2584966780",
"2018986336"
],
"abstract": [
"While finding clones in source code has drawn considerable attention, there has been only very little work in finding similar fragments in binary code and intermediate languages, such as Java bytecode. Some recent studies showed that it is possible to find distinct sets of clone pairs in bytecode representation of source code, which are not always detectable at source code-level. In this paper, we present a bytecode clone detection approach, called SeByte, which exploits the benefits of compilers (the bytecode representation) for detecting a specific type of semantic clones in Java bytecode. SeByte is a hybrid metric-based approach that takes advantage of both, Semantic Web technologies and Set theory. We use a two-step analysis process: (1) Pattern matching via Semantic Web querying and reasoning, and (2) Content matching, using Jaccard coefficient for set similarity measurement. Semantic Web-based pattern matching helps us to find method blocks which share similar patterns even in case of extreme dissimilarity (e.g., numerous repetitions or large gaps). Although it leads to high recall, it gives high false positive rate. We thus use the content matching (via Jaccard) to reduce false positive rate by focusing on content semantic resemblance. Our evaluation of four Java systems and five other tools shows that SeByte can detect a large number of semantic clones that are either not detected or supported by source code based clone detectors.",
"Existing research suggests that a considerable fraction (5-10 ) of the source code of large scale computer programs is duplicate code (\"clones\"). Detection and removal of such clones promises decreased software maintenance costs of possibly the same magnitude. Previous work was limited to detection of either near misses differing only in single lexems, or near misses only between complete functions. The paper presents simple and practical methods for detecting exact and near miss clones over arbitrary program fragments in program source code by using abstract syntax trees. Previous work also did not suggest practical means for removing detected clones. Since our methods operate in terms of the program structure, clones could be removed by mechanical methods producing in-lined procedures or standard preprocessor macros. A tool using these techniques is applied to a C production software system of some 400 K source lines, and the results confirm detected levels of duplication found by previous work. The tool produces macro bodies needed for clone removal, and macro invocations to replace the clones. The tool uses a variation of the well known compiler method for detecting common sub expressions. This method determines exact tree matches; a number of adjustments are needed to detect equivalent statement sequences, commutative operands, and nearly exact matches. We additionally suggest that clone detection could also be useful in producing more structured code, and in reverse engineering to discover domain concepts and their implementations.",
"Code clone detection is an important problem for software maintenance and evolution. Many approaches consider either structure or identifiers, but none of the existing detection techniques model both sources of information. These techniques also depend on generic, handcrafted features to represent code fragments. We introduce learning-based detection techniques where everything for representing terms and fragments in source code is mined from the repository. Our code analysis supports a framework, which relies on deep learning, for automatically linking patterns mined at the lexical level with patterns mined at the syntactic level. We evaluated our novel learning-based approach for code clone detection with respect to feasibility from the point of view of software maintainers. We sampled and manually evaluated 398 file- and 480 method-level pairs across eight real-world Java systems; 93 of the file- and method-level samples were evaluated to be true positives. Among the true positives, we found pairs mapping to all four clone types. We compared our approach to a traditional structure-oriented technique and found that our learning-based approach detected clones that were either undetected or suboptimally reported by the prominent tool Deckard. Our results affirm that our learning-based approach is suitable for clone detection and a tenable technique for researchers.",
"Code duplication or copying a code fragment and then reuse by pasting with or without any modiflcations is a well known code smell in software maintenance. Several studies show that about 5 to 20 of a software systems can contain duplicated code, which is basically the results of copying existing code fragments and using then by pasting with or without minor modiflcations. One of the major shortcomings of such duplicated fragments is that if a bug is detected in a code fragment, all the other fragments similar to it should be investigated to check the possible existence of the same bug in the similar fragments. Refactoring of the duplicated code is another prime issue in software maintenance although several studies claim that refactoring of certain clones are not desirable and there is a risk of removing them. However, it is also widely agreed that clones should at least be detected. In this paper, we survey the state of the art in clone detection research. First, we describe the clone terms commonly used in the literature along with their corresponding mappings to the commonly used clone types. Second, we provide a review of the existing clone taxonomies, detection approaches and experimental evaluations of clone detection tools. Applications of clone detection research to other domains of software engineering and in the same time how other domain can assist clone detection research have also been pointed out. Finally, this paper concludes by pointing out several open problems related to clone detection research.",
"If two fragments of source code are identical to each other, they are called code clones. Code clones introduce difficulties in software maintenance and cause bug propagation. In this paper, we present a machine learning framework to automatically detect clones in software, which is able to detect Types-3 and the most complicated kind of clones, Type-4 clones. Previously used traditional features are often weak in detecting the semantic clones The novel aspects of our approach are the extraction of features from abstract syntax trees (AST) and program dependency graphs (PDG), representation of a pair of code fragments as a vector and the use of classification algorithms. The key benefit of this approach is that our approach can find both syntactic and semantic clones extremely well. Our evaluation indicates that using our new AST and PDG features is a viable methodology, since they improve detecting clones on the IJaDataset 2.0.",
"Clone detection techniques essentially cluster textually, syntactically and or semantically similar code fragments in or across software systems. For large datasets, similarity identification is costly both in terms of time and memory, and especially so when detecting near-miss clones where lines could be modified, added and or deleted in the copied fragments. The capability and effectiveness of a clone detection tool mostly depends on the code similarity measurement technique it uses. A variety of similarity measurement approaches have been used for clone detection, including fingerprint based approaches, which have had varying degrees of success notwithstanding some limitations. In this paper, we investigate the effectiveness of simhash, a state of the art fingerprint based data similarity measurement technique for detecting both exact and near-miss clones in large scale software systems. Our experimental data show that simhash is indeed effective in identifying various types of clones in a software system despite wide variations in experimental circumstances. The approach is also suitable as a core capability for building other tools, such as tools for: incremental clone detection, code searching, and clone management."
]
} |
1907.11817 | 2954117579 | Source code similarity are increasingly used in application development to identify clones, isolate bugs, and find copy-rights violations. Similar code fragments can be very problematic due to the fact that errors in the original code must be fixed in every copy. Other maintenance changes, such as extensions or patches, must be applied multiple times. Furthermore, the diversity of coding styles and flexibility of modern languages makes it difficult and cost ineffective to manually inspect large code repositories. Therefore, detection is only feasible by automatic techniques. We present an efficient and scalable approach for similar code fragment identification based on source code control flow graphs fingerprinting. The source code is processed to generate control flow graphs that are then hashed to create a unique fingerprint of the code capturing semantics as well as syntax similarity. The fingerprints can then be efficiently stored and retrieved to perform similarity search between code fragments. Experimental results from our prototype implementation supports the validity of our approach and show its effectiveness and efficiency in comparison with other solutions. | One way to capture program semantics is through code Control Flow Graphs (CFGs). CFGs are an intermediate code representation that describes, in graph notation, all paths that might be followed through a piece of code during its execution @cite_34 . In CFGs, vertices represent basic blocks and edges (i.e., arcs) represent execution flow. Since CFGs capture syntactic and semantic features of the code, they are better at resisting changes that manipulate source code in very minor ways without affecting the functionality of the program. For this reason, control flow graphs have been used in static analysis @cite_30 , fuzzing and test coverage tools @cite_8 , execution profiling @cite_33 @cite_24 , binary code analysis @cite_7 , malware analysis @cite_26 , and anomaly analysis @cite_41 . | {
"cite_N": [
"@cite_30",
"@cite_26",
"@cite_33",
"@cite_7",
"@cite_8",
"@cite_41",
"@cite_24",
"@cite_34"
],
"mid": [
"1600009974",
"2125260159",
"95993446",
"50815725",
"2171929398",
"2126859103",
"2769473888",
"2124637344"
],
"abstract": [
"Existing program analysis tools that implement abstraction rely on saturating procedures to compute over-approximations of fixpoints. As an alternative, we propose a new algorithm to compute an over-approximation of the set of reachable states of a program by replacing loops in the control flow graph by their abstract transformer. Our technique is able to generate diagnostic information in case of property violations, which we call leaping counterexamples. We have implemented this technique and report experimental results on a set of large ANSI-C programs using abstract domains that focus on properties related to string-buffers.",
"Several techniques have been developed for identifying similar code fragments in programs. These similar fragments, referred to as code clones, can be used to identify redundant code, locate bugs, or gain insight into program design. Existing scalable approaches to clone detection are limited to finding program fragments that are similar only in their contiguous syntax. Other, semantics-based approaches are more resilient to differences in syntax, such as reordered statements, related statements interleaved with other unrelated statements, or the use of semantically equivalent control structures. However, none of these techniques have scaled to real world code bases. These approaches capture semantic information from Program Dependence Graphs (PDGs), program representations that encode data and control dependencies between statements and predicates. Our definition of a code clone is also based on this representation: we consider program fragments with isomorphic PDGs to be clones. In this paper, we present the first scalable clone detection algorithm based on this definition of semantic clones. Our insight is the reduction of the difficult graph similarity problem to a simpler tree similarity problem by mapping carefully selected PDG subgraphs to their related structured syntax. We efficiently solve the tree similarity problem to create a scalable analysis. We have implemented this algorithm in a practical tool and performed evaluations on several million-line open source projects, including the Linux kernel. Compared with previous approaches, our tool locates significantly more clones, which are often more semantically interesting than simple copied and pasted code fragments.",
"characterized in terms of properties of Rule Graphs. We show that, unfortunately, also the RG is ambiguous with respect to the answer set semantics, while the EDG is isomorphic to the program it represents. We argue that the reason of this drawback of the RG as a software engineering tool relies in the absence of a distinction between the different kinds of connections between cycles. Finally, we suggest that properties of a program might be characterized(andchecked)intermsofadmissiblecolorings of the EDG.",
"Verification using static analysis often hinges on precise numeric invariants. Numeric domains of infinite height can infer these invariants, but require widening narrowing which complicates the fixpoint computation and is often too imprecise. As a consequence, several strategies have been proposed to prevent a precision loss during widening or to narrow in a smarter way. Most of these strategies are difficult to retrofit into an existing analysis as they either require a pre-analysis, an on-the-fly modification of the CFG, or modifications to the fixpoint algorithm. We propose to encode widening and its various refinements from the literature as cofibered abstract domains that wrap standard numeric domains, thereby providing a modular way to add numeric analysis to any static analysis, that is, without modifying the fixpoint engine. Since these domains cannot make any assumptions about the structure of the program, our approach is suitable to the analysis of executables, where the (potentially irreducible) CFG is re-constructed on-the-fly. Moreover, our domain-based approach not only mirrors the precision of more intrusive approaches in the literature but also requires fewer iterations to find a fixpoint of loops than many heuristics that merely aim for precision.",
"Many classic and emerging security attacks usually introduce illegal control flow to victim programs. This paper proposes an approach to detecting violation of control flow integrity based on hardware support for performance monitoring in modern processors. The key observation is that the abnormal control flow in security breaches can be precisely captured by performance monitoring units. Based on this observation, we design and implement a system called CFIMon, which is the first non-intrusive system that can detect and reason about a variety of attacks violating control flow integrity without any changes to applications (either source or binary code) or requiring special-purpose hardware. CFIMon combines static analysis and runtime training to collect legal control flow transfers, and leverages the branch tracing store mechanism in commodity processors to collect and analyze runtime traces on-the-fly to detect violation of control flow integrity. Security evaluation shows that CFIMon has low false positives or false negatives when detecting several realistic security attacks. Performance results show that CFIMon incurs only 6.1 performance overhead on average for a set of typical server applications.",
"This paper describes an automated tool called Dex (difference extractor) for analyzing syntactic and semantic changes in large C-language code bases. It is applied to patches obtained from a source code repository, each of which comprises the code changes made to accomplish a particular task. Dex produces summary statistics characterizing these changes for all of the patches that are analyzed. Dex applies a graph differencing algorithm to abstract semantic graphs (ASGs) representing each version. The differences are then analyzed to identify higher-level program changes. We describe the design of Dex, its potential applications, and the results of applying it to analyze bug fixes from the Apache and GCC projects. The results include detailed information about the nature and frequency of missing condition defects in these projects.",
"We propose a novel deep learning-based framework to tackle the challenge of semantic segmentation of large-scale point clouds of millions of points. We argue that the organization of 3D point clouds can be efficiently captured by a structure called superpoint graph (SPG), derived from a partition of the scanned scene into geometrically homogeneous elements. SPGs offer a compact yet rich representation of contextual relationships between object parts, which is then exploited by a graph convolutional network. Our framework sets a new state of the art for segmenting outdoor LiDAR scans (+11.9 and +8.8 mIoU points for both Semantic3D test sets), as well as indoor scans (+12.4 mIoU points for the S3DIS dataset).",
"Most previous studies in computerized deception detection have relied only on shallow lexico-syntactic patterns. This paper investigates syntactic stylometry for deception detection, adding a somewhat unconventional angle to prior literature. Over four different datasets spanning from the product review to the essay domain, we demonstrate that features driven from Context Free Grammar (CFG) parse trees consistently improve the detection performance over several baselines that are based only on shallow lexico-syntactic features. Our results improve the best published result on the hotel review data (, 2011) reaching 91.2 accuracy with 14 error reduction."
]
} |
1907.11845 | 2964526184 | Generative adversarial networks (GANs) has proven hugely successful in variety of applications of image processing. However, generative adversarial networks for handwriting is relatively rare somehow because of difficulty of handling sequential handwriting data by Convolutional Neural Network (CNN). In this paper, we propose a handwriting generative adversarial network framework (HWGANs) for synthesizing handwritten stroke data. The main features of the new framework include: (i) A discriminator consists of an integrated CNN-Long-Short-Term- Memory (LSTM) based feature extraction with Path Signature Features (PSF) as input and a Feedforward Neural Network (FNN) based binary classifier; (ii) A recurrent latent variable model as generator for synthesizing sequential handwritten data. The numerical experiments show the effectivity of the new model. Moreover, comparing with sole handwriting generator, the HWGANs synthesize more natural and realistic handwritten text. | The GAN proposed in 2019 @cite_1 aims at generating realistic images of handwritten text, which is naturally a fit for Optical Character Recognition (OCR). The authors use bidirectional LSTM recurrent layers to get an embedding of the word to be rendered, and then feed it to the generator network. They also modify the standard GAN by adding an auxiliary network for text recognition. However, although its generated images are realistic, this approach cannot directly synthesize handwritten text as digital ink, so an additional effective Ink Grab algorithm is further required for the conversion from image to digital stroke. | {
"cite_N": [
"@cite_1"
],
"mid": [
"2920553990"
],
"abstract": [
"State-of-the-art offline handwriting text recognition systems tend to use neural networks and therefore require a large amount of annotated data to be trained. In order to partially satisfy this requirement, we propose a system based on Generative Adversarial Networks (GAN) to produce synthetic images of handwritten words. We use bidirectional LSTM recurrent layers to get an embedding of the word to be rendered, and we feed it to the generator network. We also modify the standard GAN by adding an auxiliary network for text recognition. The system is then trained with a balanced combination of an adversarial loss and a CTC loss. Together, these extensions to GAN enable to control the textual content of the generated word images. We obtain realistic images on both French and Arabic datasets, and we show that integrating these synthetic images into the existing training data of a text recognition system can slightly enhance its performance."
]
} |
1907.11845 | 2964526184 | Generative adversarial networks (GANs) has proven hugely successful in variety of applications of image processing. However, generative adversarial networks for handwriting is relatively rare somehow because of difficulty of handling sequential handwriting data by Convolutional Neural Network (CNN). In this paper, we propose a handwriting generative adversarial network framework (HWGANs) for synthesizing handwritten stroke data. The main features of the new framework include: (i) A discriminator consists of an integrated CNN-Long-Short-Term- Memory (LSTM) based feature extraction with Path Signature Features (PSF) as input and a Feedforward Neural Network (FNN) based binary classifier; (ii) A recurrent latent variable model as generator for synthesizing sequential handwritten data. The numerical experiments show the effectivity of the new model. Moreover, comparing with sole handwriting generator, the HWGANs synthesize more natural and realistic handwritten text. | Alex Graves proposed an RNN-based generator model to mimic handwriting data @cite_12 , which is referred to throughout the whole paper. For each timestamp, the model encodes the prefix of the sampled path to produce a set of parameters of a probability distribution over the next stroke point, and then samples the next stroke point from this distribution. There are two variants of the model, i.e., a handwriting predictor and a handwriting synthesizer, where the latter has the capability to synthesize handwritten text for a given text. | {
"cite_N": [
"@cite_12"
],
"mid": [
"2573871018"
],
"abstract": [
"Recently, we propose deep neural network based hidden Markov models (DNN-HMMs) for offline handwritten Chinese text recognition. In this study, we design a novel writer code based adaptation on top of the DNN-HMM to further improve the accuracy via a customized recognizer. The writer adaptation is implemented by incorporating the new layers with the original input or hidden layers of the writer-independent DNN. These new layers are driven by the so-called writer code, which guides and adapts the DNN-based recognizer with the writer information. In the training stage, the writer-aware layers are jointly learned with the conventional DNN layers in an alternative manner. In the recognition stage, with the initial recognition results from the first-pass decoding with the writer-independent DNN, an unsupervised adaptation is performed to generate the writer code via the cross-entropy criterion for the subsequent second-pass decoding. The experiments on the most challenging task of ICDAR 2013 Chinese handwriting competition show that our proposed adaptation approach can achieve consistent and significant improvements of recognition accuracy over a highperformance writer-independent DNN-HMM based recognizer across all 60 writers, yielding a relative character error rate reduction of 23.62 in average."
]
} |
1907.11836 | 2966726302 | Massive multiple-input multiple-output (MIMO) with frequency division duplex (FDD) mode is a promising approach to increasing system capacity and link robustness for the fifth generation (5G) wireless cellular systems. The premise of these advantages is the accurate downlink channel state information (CSI) fed back from user equipment. However, conventional feedback methods have difficulties in reducing feedback overhead due to significant amount of base station (BS) antennas in massive MIMO systems. Recently, deep learning (DL)-based CSI feedback conquers many difficulties, yet still shows insufficiency to decrease the occupation of uplink bandwidth resources. In this paper, to solve this issue, we combine DL and superimposed coding (SC) for CSI feedback, in which the downlink CSI is spread and then superimposed on uplink user data sequences (UL-US) toward the BS. Then, a multi-task neural network (NN) architecture is proposed at BS to recover the downlink CSI and UL-US by unfolding two iterations of the minimum mean-squared error (MMSE) criterion-based interference reduction. In addition, for a network training, a subnet-by-subnet approach is exploited to facilitate the parameter tuning and expedite the convergence rate. Compared with standalone SC-based CSI scheme, our multi-task NN, trained in a specific signal-to-noise ratio (SNR) and power proportional coefficient (PPC), consistently improves the estimation of downlink CSI with similar or better UL-US detection under SNR and PPC varying. | Without any occupation of uplink bandwidth resources, @cite_31 and @cite_6 estimated downlink CSI from uplink CSI by using a DL approach. In @cite_31 , the core idea was that since the same propagation environment is shared by both the uplink and downlink channels, the environment information could be applied to the downlink channel after it was extracted from the uplink channel response. Similar to @cite_31 , an NN-based scheme for extrapolating downlink CSI from observed uplink CSI was proposed in @cite_6 , where the underlying physical relation between the downlink and uplink frequency bands was exploited to construct the learning architecture. It should be mentioned that the method in @cite_31 usually needs to retrain the NN when the environment information changes significantly. For example, for a well-trained device, the environment information it has extracted (e.g., the shapes of buildings, streets, and mountains, and the materials that objects are made of, etc.) in one city would no longer be applicable in another. The method in @cite_6 will encounter poor CSI recovery performance when the frequency interval between the downlink and uplink bands is wide. | {
"cite_N": [
"@cite_31",
"@cite_6"
],
"mid": [
"2904192264",
"2109711397"
],
"abstract": [
"Knowledge of the channel state information (CSI) at the transmitter side is one of the primary sources of information that can be used for efficient allocation of wireless resources. Obtaining Down-Link (DL) CSI in FDD systems from Up-Link (UL) CSI is not as straightforward as TDD systems, and so usually users feedback the DL-CSI to the transmitter. To remove the need for feedback (and thus having less signaling overhead), several methods have been studied to estimate DL-CSI from UL-CSI. In this paper, we propose a scheme to infer DL-CSI by observing UL-CSI in which we use two recent deep neural network structures: a) Convolutional Neural network and b) Generative Adversarial Networks. The proposed deep network structures are first learning a latent model of the environment from the training data. Then, the resulted latent model is used to predict the DL-CSI from the UL-CSI. We have simulated the proposed scheme and evaluated its performance in a few network settings.",
"In closed-loop FDD MIMO system, downlink channel state information (DL-CSI) is usually feedback to base station in forms of codebook or CQI, both of which aim at lowering the feedback quantity at the cost of limited feedback precision and heavy processing complexity at mobile side. Meanwhile, the recently proposed direct channel feedback method incurs great system overhead due to its exclusive occupation of uplink bandwidth resources. We propose a low-cost feedback method for DL-CSI, which spreads unquantized and uncoded DL-CSI and superimposes it onto uplink user data sequences (UL-US). Exclusive occupation of system resources by DL-CSI can thus be avoided. Due to spreading, DL-CSI can be estimated accurately with little power allocation at the cost of some UL-US's SER performance"
]
} |
1907.11836 | 2966726302 | Massive multiple-input multiple-output (MIMO) with frequency division duplex (FDD) mode is a promising approach to increasing system capacity and link robustness for the fifth generation (5G) wireless cellular systems. The premise of these advantages is the accurate downlink channel state information (CSI) fed back from user equipment. However, conventional feedback methods have difficulties in reducing feedback overhead due to significant amount of base station (BS) antennas in massive MIMO systems. Recently, deep learning (DL)-based CSI feedback conquers many difficulties, yet still shows insufficiency to decrease the occupation of uplink bandwidth resources. In this paper, to solve this issue, we combine DL and superimposed coding (SC) for CSI feedback, in which the downlink CSI is spread and then superimposed on uplink user data sequences (UL-US) toward the BS. Then, a multi-task neural network (NN) architecture is proposed at BS to recover the downlink CSI and UL-US by unfolding two iterations of the minimum mean-squared error (MMSE) criterion-based interference reduction. In addition, for a network training, a subnet-by-subnet approach is exploited to facilitate the parameter tuning and expedite the convergence rate. Compared with standalone SC-based CSI scheme, our multi-task NN, trained in a specific signal-to-noise ratio (SNR) and power proportional coefficient (PPC), consistently improves the estimation of downlink CSI with similar or better UL-US detection under SNR and PPC varying. | On the whole, the DL-based and SC-based CSI feedback methods still face significant challenges, which can be summarized as follows. Since they concentrate on feedback reduction, the DL-based CSI feedback methods, e.g., the methods in @cite_34 -- @cite_10 , inevitably occupy uplink bandwidth resources. Although they avoid occupying uplink bandwidth resources, the methods that estimate downlink CSI from uplink CSI in @cite_31 and @cite_6 usually have limited applicability in mobile environments or in environments with a wide frequency interval between the downlink and uplink bands. The SC-based CSI feedback @cite_21 can also avoid occupying uplink bandwidth resources, but it faces a significant challenge in canceling the interference between the downlink CSI and the UL-US, owing to the lack of good solutions in previous works. | {
"cite_N": [
"@cite_21",
"@cite_6",
"@cite_31",
"@cite_34",
"@cite_10"
],
"mid": [
"2109711397",
"2904192264",
"1987954156",
"2963145597",
"2908955555"
],
"abstract": [
"In closed-loop FDD MIMO system, downlink channel state information (DL-CSI) is usually feedback to base station in forms of codebook or CQI, both of which aim at lowering the feedback quantity at the cost of limited feedback precision and heavy processing complexity at mobile side. Meanwhile, the recently proposed direct channel feedback method incurs great system overhead due to its exclusive occupation of uplink bandwidth resources. We propose a low-cost feedback method for DL-CSI, which spreads unquantized and uncoded DL-CSI and superimposes it onto uplink user data sequences (UL-US). Exclusive occupation of system resources by DL-CSI can thus be avoided. Due to spreading, DL-CSI can be estimated accurately with little power allocation at the cost of some UL-US's SER performance",
"Knowledge of the channel state information (CSI) at the transmitter side is one of the primary sources of information that can be used for efficient allocation of wireless resources. Obtaining Down-Link (DL) CSI in FDD systems from Up-Link (UL) CSI is not as straightforward as TDD systems, and so usually users feedback the DL-CSI to the transmitter. To remove the need for feedback (and thus having less signaling overhead), several methods have been studied to estimate DL-CSI from UL-CSI. In this paper, we propose a scheme to infer DL-CSI by observing UL-CSI in which we use two recent deep neural network structures: a) Convolutional Neural network and b) Generative Adversarial Networks. The proposed deep network structures are first learning a latent model of the environment from the training data. Then, the resulted latent model is used to predict the DL-CSI from the UL-CSI. We have simulated the proposed scheme and evaluated its performance in a few network settings.",
"In this paper, we study resource allocation in a downlink OFDMA system assuming imperfect channel state information (CSI) at the transmitter. To achieve the individual QoS of the users in OFDMA system, adaptive resource allocation is very important, and has therefore been an active area of research. However, in most of the the previous work perfect CSI at the transmitter is assumed which is rarely possible due to channel estimation error and feedback delay. In this paper, we study the effect of channel estimation error on resource allocation in a downlink OFDMA system. We assume that each user terminal estimates its channel by using an MMSE estimator and sends its CSI back to the base station through a feedback channel. We approach the problem by using convex optimization framework, provide an explicit closed form expression for the users' transmit power and then develop an optimal margin adaptive resource allocation algorithm. Our proposed algorithm minimizes the total transmit power of the system subject to constraints on users' average data rate. The algorithm has polynomial complexity and solves the problem with zero optimality gaps. Simulation results show that our algorithm highly improves the system performance in the presence of imperfect channel estimation.",
"In frequency division duplex mode, the downlink channel state information (CSI) should be sent to the base station through feedback links so that the potential gains of a massive multiple-input multiple-output can be exhibited. However, such a transmission is hindered by excessive feedback overhead. In this letter, we use deep learning technology to develop CsiNet, a novel CSI sensing and recovery mechanism that learns to effectively use channel structure from training samples. CsiNet learns a transformation from CSI to a near-optimal number of representations (or codewords) and an inverse transformation from codewords to CSI. We perform experiments to demonstrate that CsiNet can recover CSI with significantly improved reconstruction quality compared with existing compressive sensing (CS)-based methods. Even at excessively low compression regions where CS-based methods cannot work, CsiNet retains effective beamforming gain.",
"A major obstacle for widespread deployment of frequency division duplex (FDD)-based Massive multiple-input multiple-output (MIMO) communications is the large signaling overhead for reporting full downlink (DL) channel state information (CSI) back to the basestation (BS), in order to enable closed-loop precoding. We completely remove this overhead by a deep-learning based channel extrapolation (or \"prediction\") approach and demonstrate that a neural network (NN) at the BS can infer the DL CSI centered around a frequency @math by solely observing uplink (UL) CSI on a different, yet adjacent frequency band around @math ; no more pilot reporting overhead is needed than with a genuine time division duplex (TDD)-based system. The rationale is that scatterers and the large-scale propagation environment are sufficiently similar to allow a NN to learn about the physical connections and constraints between two neighboring frequency bands, and thus provide a well-operating system even when classic extrapolation methods, like the Wiener filter (used as a baseline for comparison throughout) fails. We study its performance for various state-of-the-art Massive MIMO channel models, and, even more so, evaluate the scheme using actual Massive MIMO channel measurements, rendering it to be practically feasible at negligible loss in spectral efficiency when compared to a genuine TDD-based system."
]
} |
1907.11770 | 2965569712 | In this paper we compare learning-based methods and classical methods for navigation in virtual environments. We construct classical navigation agents and demonstrate that they outperform state-of-the-art learning-based agents on two standard benchmarks: MINOS and Stanford Large-Scale 3D Indoor Spaces. We perform detailed analysis to study the strengths and weaknesses of learned agents and classical agents, as well as how characteristics of the virtual environment impact navigation performance. Our results show that learned agents have inferior collision avoidance and memory management, but are superior in handling ambiguity and noise. These results can inform future design of navigation agents. | Error analysis has played an important role in computer vision research such as object detection @cite_41 and VQA @cite_14 . Although many learning-based methods have recently been proposed for navigation @cite_11 @cite_3 @cite_2 @cite_29 @cite_33 , there has been little work focused on error analysis of state-of-the-art methods. The closest to ours are the concurrent works by Mishkin et al. @cite_34 and Savva et al. @cite_13 , who benchmarked learned agents against classical ones in indoor simulators. Our work is similar in that it compares learned and classical agents, but differs in that we propose new metrics to diagnose various aspects of navigation capability, including collision avoidance, memory management, and exploitation of available information. | {
"cite_N": [
"@cite_14",
"@cite_33",
"@cite_41",
"@cite_29",
"@cite_3",
"@cite_2",
"@cite_34",
"@cite_13",
"@cite_11"
],
"mid": [
"2110405746",
"2130155248",
"2963272646",
"2949153416",
"2051349034",
"2767506186",
"2769282630",
"1964806982",
"2031397756"
],
"abstract": [
"Learning and then recognizing a route, whether travelled during the day or at night, in clear or inclement weather, and in summer or winter is a challenging task for state of the art algorithms in computer vision and robotics. In this paper, we present a new approach to visual navigation under changing conditions dubbed SeqSLAM. Instead of calculating the single location most likely given a current image, our approach calculates the best candidate matching location within every local navigation sequence. Localization is then achieved by recognizing coherent sequences of these “local best matches”. This approach removes the need for global matching performance by the vision front-end - instead it must only pick the best match within any short sequence of images. The approach is applicable over environment changes that render traditional feature-based techniques ineffective. Using two car-mounted camera datasets we demonstrate the effectiveness of the algorithm and compare it to one of the most successful feature-based SLAM algorithms, FAB-MAP. The perceptual change in the datasets is extreme; repeated traverses through environments during the day and then in the middle of the night, at times separated by months or years and in opposite seasons, and in clear weather and extremely heavy rain. While the feature-based method fails, the sequence-based algorithm is able to match trajectory segments at 100 precision with recall rates of up to 60 .",
"This paper describes a technique for multi-agent exploration of an unknown environment, that improves the quality of the map by reducing the inaccuracies that occur over time from dead reckoning errors. We present an algorithmic solution, simulation results, as well as a cost analysis and experimental data. The approach is based on using a pair of robots that observe one another’s behaviour, thus greatly reducing odometry errors. We assume the robots can both directly sense nearby obstacles and see one another. We have implemented both these capabilities with actual robots in our lab. By exploiting the ability of the robots to see one another, we can detect opaque obstacles in the environment independent of their surface reflectance properties. 1",
"In this paper, we present a novel, general, and efficient architecture for addressing computer vision problems that are approached from an 'Analysis by Synthesis' standpoint. Analysis by synthesis involves the minimization of reconstruction error, which is typically a non-convex function of the latent target variables. State-of-the-art methods adopt a hybrid scheme where discriminatively trained predictors like Random Forests or Convolutional Neural Networks are used to initialize local search algorithms. While these hybrid methods have been shown to produce promising results, they often get stuck in local optima. Our method goes beyond the conventional hybrid architecture by not only proposing multiple accurate initial solutions but by also defining a navigational structure over the solution space that can be used for extremely efficient gradient-free local search. We demonstrate the efficacy and generalizability of our approach on tasks as diverse as Hand Pose Estimation, RGB Camera Relocalization, and Image Retrieval.",
"As humans we possess an intuitive ability for navigation which we master through years of practice; however existing approaches to model this trait for diverse tasks including monitoring pedestrian flow and detecting abnormal events have been limited by using a variety of hand-crafted features. Recent research in the area of deep-learning has demonstrated the power of learning features directly from the data; and related research in recurrent neural networks has shown exemplary results in sequence-to-sequence problems such as neural machine translation and neural image caption generation. Motivated by these approaches, we propose a novel method to predict the future motion of a pedestrian given a short history of their, and their neighbours, past behaviour. The novelty of the proposed method is the combined attention model which utilises both \"soft attention\" as well as \"hard-wired\" attention in order to map the trajectory information from the local neighbourhood to the future positions of the pedestrian of interest. We illustrate how a simple approximation of attention weights (i.e hard-wired) can be merged together with soft attention weights in order to make our model applicable for challenging real world scenarios with hundreds of neighbours. The navigational capability of the proposed method is tested on two challenging publicly available surveillance databases where our model outperforms the current-state-of-the-art methods. Additionally, we illustrate how the proposed architecture can be directly applied for the task of abnormal event detection without handcrafting the features.",
"In this paper, we study estimator inconsistency in vision-aided inertial navigation systems (VINS) from the standpoint of system's observability. We postulate that a leading cause of inconsistency is the gain of spurious information along unobservable directions, which results in smaller uncertainties, larger estimation errors, and divergence. We develop an observability constrained VINS (OC-VINS), which explicitly enforces the unobservable directions of the system, hence preventing spurious information gain and reducing inconsistency. This framework is applicable to several variants of the VINS problem such as visual simultaneous localization and mapping (V-SLAM), as well as visual-inertial odometry using the multi-state constraint Kalman filter (MSC-KF). Our analysis, along with the proposed method to reduce inconsistency, are extensively validated with simulation trials and real-world experimentation.",
"Deep learning techniques have shown success in learning from raw high-dimensional data in various applications. While deep reinforcement learning is recently gaining popularity as a method to train intelligent agents, utilizing deep learning in imitation learning has been scarcely explored. Imitation learning can be an efficient method to teach intelligent agents by providing a set of demonstrations to learn from. However, generalizing to situations that are not represented in the demonstrations can be challenging, especially in 3D environments. In this paper, we propose a deep imitation learning method to learn navigation tasks from demonstrations in a 3D environment. The supervised policy is refined using active learning in order to generalize to unseen situations. This approach is compared to two popular deep reinforcement learning techniques: deep-Q-networks and Asynchronous actor-critic (A3C). The proposed method as well as the reinforcement learning methods employ deep convolutional neural networks and learn directly from raw visual input. Methods for combining learning from demonstrations and experience are also investigated. This combination aims to join the generalization ability of learning by experience with the efficiency of learning by imitation. The proposed methods are evaluated on 4 navigation tasks in a 3D simulated environment. Navigation tasks are a typical problem that is relevant to many real applications. They pose the challenge of requiring demonstrations of long trajectories to reach the target and only providing delayed rewards (usually terminal) to the agent. The experiments show that the proposed method can successfully learn navigation tasks from raw visual input while learning from experience methods fail to learn an effective policy. Moreover, it is shown that active learning can significantly improve the performance of the initially learned policy using a small number of active samples.",
"We present an interpretable framework for path prediction that leverages dependencies between agents’ behaviors and their spatial navigation environment. We exploit two sources of information: the past motion trajectory of the agent of interest and a wide top-view image of the navigation scene. We propose a Clairvoyant Attentive Recurrent Network (CAR-Net) that learns where to look in a large image of the scene when solving the path prediction task. Our method can attend to any area, or combination of areas, within the raw image (e.g., road intersections) when predicting the trajectory of the agent. This allows us to visualize fine-grained semantic elements of navigation scenes that influence the prediction of trajectories. To study the impact of space on agents’ trajectories, we build a new dataset made of top-view images of hundreds of scenes (Formula One racing tracks) where agents’ behaviors are heavily influenced by known areas in the images (e.g., upcoming turns). CAR-Net successfully attends to these salient regions. Additionally, CAR-Net reaches state-of-the-art accuracy on the standard trajectory forecasting benchmark, Stanford Drone Dataset (SDD). Finally, we show CAR-Net’s ability to generalize to unseen scenes.",
"Automatic analysis and understanding of common activities and detection of deviant behaviors is a challenging task in computer vision. This is particularly true in surveillance data, where busy traffic scenes are rich with multifarious activities many of them occurring simultaneously. In this paper, we address these issues with an unsupervised learning approach relying on probabilistic Latent Semantic Analysis (pLSA) applied to a rich set visual features including motion and size activities for discovering relevant activity patterns occurring in such scenes. We then show how the discovered patterns can directly be used to segment the scene into regions with clear semantic activity content. Furthermore, we introduce novel abnormality detection measures within the scope of the adopted modeling approach, and investigate in detail their performance with respect to various issues. Experiments on 45 minutes of video captured from a busy traffic scene and involving abnormal events are conducted.",
"This paper examines techniques for finding falsifying trajectories of hybrid systems using an approach that we call trajectory splicing. Many formal verification techniques for hybrid systems, including flowpipe construction, can identify plausible abstract counterexamples for property violations. However, there is often a gap between the reported abstract counterexamples and the concrete system trajectories. Our approach starts with a candidate sequence of disconnected trajectory segments, each segment lying inside a discrete mode. However, such disconnected segments do not form concrete violations due to the gaps that exist between the ending state of one segment and the starting state of the subsequent segment. Therefore, trajectory splicing uses local optimization to minimize the gap between these segments, effectively splicing them together to form a concrete trajectory. We demonstrate the use of our approach for falsifying safety properties of hybrid systems using standard optimization techniques. As such, our approach is not restricted to linear systems. We compare our approach with other falsification approaches including uniform random sampling and a robustness guided falsification approach used in the tool S-Taliro. Our preliminary evaluation clearly shows the potential of our approach to search for candidate trajectory segments and use them to find concrete property violations."
]
} |
1907.11770 | 2965569712 | In this paper we compare learning-based methods and classical methods for navigation in virtual environments. We construct classical navigation agents and demonstrate that they outperform state-of-the-art learning-based agents on two standard benchmarks: MINOS and Stanford Large-Scale 3D Indoor Spaces. We perform detailed analysis to study the strengths and weaknesses of learned agents and classical agents, as well as how characteristics of the virtual environment impact navigation performance. Our results show that learned agents have inferior collision avoidance and memory management, but are superior in handling ambiguity and noise. These results can inform future design of navigation agents. | Another line of research follows a more modular approach by developing learning-based navigation modules that can be integrated into a larger network. For example, localization can be formulated as a 3-DOF or 6-DOF camera pose estimation problem and performed by a deep network @cite_22 @cite_7 @cite_21 . Learning-based approaches have also been studied in the context of SLAM @cite_36 @cite_18 @cite_25 . Most relevant to our work, Tamar et al. propose the Value Iteration Network (VIN) @cite_43 as a differentiable planner, and Gupta et al. integrate VIN with a differentiable mapper and propose CMP in @cite_28 , an end-to-end mapper-planner which we analyze as the state-of-the-art method with specially designed components. | {
"cite_N": [
"@cite_18",
"@cite_22",
"@cite_7",
"@cite_36",
"@cite_28",
"@cite_21",
"@cite_43",
"@cite_25"
],
"mid": [
"2909119029",
"2909955272",
"2800595980",
"2948138929",
"2258731934",
"2892197942",
"2745859992",
"2056358962"
],
"abstract": [
"This paper presents a deep network based unsupervised visual odometry system for 6-DoF camera pose estimation and finding dense depth map for its monocular view. The proposed network is trained using unlabeled binocular stereo image pairs and is shown to provide superior performance in depth and ego-motion estimation compared to the existing state-of-the-art. This is achieved by introducing a novel objective function and training the network using temporally alligned sequences of monocular images. The objective function is based on the Charbonnier penalty applied to spatial and bi-directional temporal reconstruction losses. The overall novelty of the approach lies in the fact that the proposed deep framework combines a disparity-based depth estimation network with a pose estimation network to obtain absolute scale-aware 6-DoF camera pose and superior depth map. According to our knowledge, such a framework with complete unsupervised end-to-end learning has not been tried so far, making it a novel contribution in the field. The effectiveness of the approach is demonstrated through performance comparison with the state-of-the-art methods on KITTI driving dataset.",
"As simultaneous localization and mapping (SLAM) techniques have flourished with the advent of 3D Light Detection and Ranging (LiDAR) sensors, accurate 3D maps are readily available. Many researchers turn their attention to localization in a previously acquired 3D map. In this paper, we propose a novel and lightweight camera-only visual positioning algorithm that involves localization within prior 3D LiDAR maps. We aim to achieve the consumer level global positioning system (GPS) accuracy using vision within the urban environment, where GPS signal is unreliable. Via exploiting a stereo camera, depth from the stereo disparity map is matched with 3D LiDAR maps. A full six degree of freedom (DOF) camera pose is estimated via minimizing depth residual. Powered by visual tracking that provides a good initial guess for the localization, the proposed depth residual is successfully applied for camera pose estimation. Our method runs online, as the average localization error is comparable to ones resulting from state-of-the-art approaches. We validate the proposed method as a stand-alone localizer using KITTI dataset and as a module in the SLAM framework using our own dataset.",
"In this paper, we propose a novel robocentric formulation of the visual-inertial navigation system (VINS) within a sliding-window filtering framework and design an efficient, lightweight, robocentric visual-inertial odometry (R-VIO) algorithm for consistent motion tracking even in challenging environments using only a monocular camera and a 6-axis IMU. The key idea is to deliberately reformulate the VINS with respect to a moving local frame, rather than a fixed global frame of reference as in the standard world-centric VINS, in order to obtain relative motion estimates of higher accuracy for updating global poses. As an immediate advantage of this robocentric formulation, the proposed R-VIO can start from an arbitrary pose, without the need to align the initial orientation with the global gravitational direction. More importantly, we analytically show that the linearized robocentric VINS does not undergo the observability mismatch issue as in the standard world-centric counterpart which was identified in the literature as the main cause of estimation inconsistency. Additionally, we investigate in-depth the special motions that degrade the performance in the world-centric formulation and show that such degenerate cases can be easily compensated in the proposed robocentric formulation, without resorting to additional sensors as in the world-centric formulation, thus leading to better robustness. The proposed R-VIO algorithm has been extensively tested through both Monte Carlo simulations and real-world experiments with different sensor platforms navigating in different environments, and shown to achieve better (or competitive at least) performance than the state-of-the-art VINS, in terms of consistency, accuracy and efficiency.",
"We introduce the value iteration network (VIN): a fully differentiable neural network with a 'planning module' embedded within. VINs can learn to plan, and are suitable for predicting outcomes that involve planning-based reasoning, such as policies for reinforcement learning. Key to our approach is a novel differentiable approximation of the value-iteration algorithm, which can be represented as a convolutional neural network, and trained end-to-end using standard backpropagation. We evaluate VIN based policies on discrete and continuous path-planning domains, and on a natural-language based search task. We show that by learning an explicit planning computation, VIN policies generalize better to new, unseen domains.",
"We introduce the value iteration network (VIN): a fully differentiable neural network with a planning module' embedded within. VINs can learn to plan, and are suitable for predicting outcomes that involve planning-based reasoning, such as policies for reinforcement learning. Key to our approach is a novel differentiable approximation of the value-iteration algorithm, which can be represented as a convolutional neural network, and trained end-to-end using standard backpropagation. We evaluate VIN based policies on discrete and continuous path-planning domains, and on a natural-language based search task. We show that by learning an explicit planning computation, VIN policies generalize better to new, unseen domains.",
"This paper presents an unsupervised deep learning framework called UnDEMoN for estimating dense depth map and 6-DoF camera pose information directly from monocular images. The proposed network is trained using unlabeled monocular stereo image pairs and is shown to provide superior performance in depth and ego-motion estimation compared to the existing state-of-the-art. These improvements are achieved by introducing a new objective function that aims to minimize spatial as well as temporal reconstruction losses simultaneously. These losses are defined using bi-linear sampling kernel and penalized using the Charbonnier penalty function. The objective function, thus created, provides robustness to image gradient noises thereby improving the overall estimation accuracy without resorting to any coarse to fine strategies which are currently prevalent in the literature. Another novelty lies in the fact that we combine a disparity-based depth estimation network with a pose estimation network to obtain absolute scale-aware 6 DOF Camera pose and superior depth map. The effectiveness of the proposed approach is demonstrated through performance comparison with the existing supervised and unsupervised methods on the KITTI driving dataset.",
"One camera and one low-cost inertial measurement unit (IMU) form a monocular visual-inertial system (VINS), which is the minimum sensor suite (in size, weight, and power) for the metric six degrees-of-freedom (DOF) state estimation. In this paper, we present VINS-Mono: a robust and versatile monocular visual-inertial state estimator. Our approach starts with a robust procedure for estimator initialization. A tightly coupled, nonlinear optimization-based method is used to obtain highly accurate visual-inertial odometry by fusing preintegrated IMU measurements and feature observations. A loop detection module, in combination with our tightly coupled formulation, enables relocalization with minimum computation. We additionally perform 4-DOF pose graph optimization to enforce the global consistency. Furthermore, the proposed system can reuse a map by saving and loading it in an efficient way. The current and previous maps can be merged together by the global pose graph optimization. We validate the performance of our system on public datasets and real-world experiments and compare against other state-of-the-art algorithms. We also perform an onboard closed-loop autonomous flight on the microaerial-vehicle platform and port the algorithm to an iOS-based demonstration. We highlight that the proposed work is a reliable, complete, and versatile system that is applicable for different applications that require high accuracy in localization. We open source our implementations for both PCs ( https: github.com HKUST-Aerial-Robotics VINS-Mono ) and iOS mobile devices ( https: github.com HKUST-Aerial-Robotics VINS-Mobile ).",
"In this paper, we focus on the problem of motion tracking in unknown environments using visual and inertial sensors. We term this estimation task visual-inertial odometry (VIO), in analogy to the well-known visual-odometry problem. We present a detailed study of extended Kalman filter (EKF)-based VIO algorithms, by comparing both their theoretical properties and empirical performance. We show that an EKF formulation where the state vector comprises a sliding window of poses (the multi-state-constraint Kalman filter (MSCKF)) attains better accuracy, consistency, and computational efficiency than the simultaneous localization and mapping (SLAM) formulation of the EKF, in which the state vector contains the current pose and the features seen by the camera. Moreover, we prove that both types of EKF approaches are inconsistent, due to the way in which Jacobians are computed. Specifically, we show that the observability properties of the EKF's linearized system models do not match those of the underlying system, which causes the filters to underestimate the uncertainty in the state estimates. Based on our analysis, we propose a novel, real-time EKF-based VIO algorithm, which achieves consistent estimation by (i) ensuring the correct observability properties of its linearized system model, and (ii) performing online estimation of the camera-to-inertial measurement unit (IMU) calibration parameters. This algorithm, which we term MSCKF 2.0, is shown to achieve accuracy and consistency higher than even an iterative, sliding-window fixed-lag smoother, in both Monte Carlo simulations and real-world testing."
]
} |
1907.11653 | 2965308893 | The prediction of electrical power in combined cycle power plants is a key challenge in the electrical power and energy systems field. This power output can vary depending on environmental variables, such as temperature, pressure, and humidity. Thus, the business problem is how to predict the power output as a function of these environmental conditions in order to maximize the profit. The research community has solved this problem by applying machine learning techniques and has managed to reduce the computational and time costs in comparison with the traditional thermodynamical analysis. Until now, this challenge has been tackled from a batch learning perspective in which data is assumed to be at rest, and where models do not continuously integrate new information into already constructed models. We present an approach closer to the Big Data and Internet of Things paradigms in which data is arriving continuously and where models learn incrementally, achieving significant enhancements in terms of data processing (time, memory and computational costs), and obtaining competitive performances. This work compares and examines the hourly electrical power prediction of several streaming regressors, and discusses about the best technique in terms of time processing and performance to be applied on this streaming scenario. | Regarding the SL topic, many studies have focused on it due to its aforementioned relevance, such as @cite_3 @cite_57 @cite_36 @cite_44 @cite_25 , and more recently @cite_28 @cite_51 @cite_30 @cite_5 . The application of regression techniques to SL has recently been addressed in @cite_16 , where the authors cover the most important online regression methods. The work @cite_37 deals with ensemble learning from data streams, and it focuses specifically on regression ensembles. The authors of @cite_4 propose several criteria for efficient sample selection for SL regression problems within an online active learning context. In general, regression tasks in SL have not received as much attention as classification tasks, and this was highlighted in @cite_21 , where researchers carried out a study and an empirical evaluation of a set of online algorithms for regression, which include the baseline Hoeffding-based regression trees, online option trees, and an online least mean squares filter. | {
"cite_N": [
"@cite_30",
"@cite_37",
"@cite_4",
"@cite_36",
"@cite_28",
"@cite_21",
"@cite_3",
"@cite_57",
"@cite_44",
"@cite_5",
"@cite_16",
"@cite_51",
"@cite_25"
],
"mid": [
"2073427650",
"2491694318",
"2142057089",
"2952662639",
"2208030666",
"2604735854",
"2047291883",
"2964047892",
"2127459401",
"1599871777",
"2953258117",
"2026158992",
"1914856875"
],
"abstract": [
"Abstract The emergence of ubiquitous sources of streaming data has given rise to the popularity of algorithms for online machine learning. In that context, Hoeffding trees represent the state-of-the-art algorithms for online classification. Their popularity stems in large part from their ability to process large quantities of data with a speed that goes beyond the processing power of any other streaming or batch learning algorithm. As a consequence, Hoeffding trees have often been used as base models of many ensemble learning algorithms for online classification. However, despite the existence of many algorithms for online classification, ensemble learning algorithms for online regression do not exist. In particular, the field of online any-time regression analysis seems to have experienced a serious lack of attention. In this paper, we address this issue through a study and an empirical evaluation of a set of online algorithms for regression, which includes the baseline Hoeffding-based regression trees, online option trees, and an online least mean squares filter. We also design, implement and evaluate two novel ensemble learning methods for online regression: online bagging with Hoeffding-based model trees, and an online RandomForest method in which we have used a randomized version of the online model tree learning algorithm as a basic building block. Within the study presented in this paper, we evaluate the proposed algorithms along several dimensions: predictive accuracy and quality of models, time and memory requirements, bias–variance and bias–variance–covariance decomposition of the error, and responsiveness to concept drift.",
"We study resource-limited online learning, motivated by the problem of conditional-branch outcome prediction in computer architecture. In particular, we consider (parallel) time and space-efficient ensemble learners for online settings, empirically demonstrating benefits similar to those shown previously for offline ensembles. Our learning algorithms are inspired by the previously published “boosting by filtering” framework as well as the offline Arc-x4 boosting-style algorithm. We train ensembles of online decision trees using a novel variant of the ID4 online decision-tree algorithm as the base learner, and show empirical results for both boosting and bagging-style online ensemble methods. Our results evaluate these methods on both our branch prediction domain and online variants of three familiar machine-learning benchmarks. Our data justifies three key claims. First, we show empirically that our extensions to ID4 significantly improve performance for single trees and additionally are critical to achieving performance gains in tree ensembles. Second, our results indicate significant improvements in predictive accuracy with ensemble size for the boosting-style algorithm. The bagging algorithms we tried showed poor performance relative to the boosting-style algorithm (but still improve upon individual base learners). Third, we show that ensembles of small trees are often able to outperform large single trees with the same number of nodes (and similarly outperform smaller ensembles of larger trees that use the same total number of nodes). This makes online boosting particularly useful in domains such as branch prediction with tight space restrictions (i.e., the available real-estate on a microprocessor chip).",
"Recommender problems with large and dynamic item pools are ubiquitous in web applications like content optimization, online advertising and web search. Despite the availability of rich item meta-data, excess heterogeneity at the item level often requires inclusion of item-specific \"factors\" (or weights) in the model. However, since estimating item factors is computationally intensive, it poses a challenge for time-sensitive recommender problems where it is important to rapidly learn factors for new items (e.g., news articles, event updates, tweets) in an online fashion. In this paper, we propose a novel method called FOBFM (Fast Online Bilinear Factor Model) to learn item-specific factors quickly through online regression. The online regression for each item can be performed independently and hence the procedure is fast, scalable and easily parallelizable. However, the convergence of these independent regressions can be slow due to high dimensionality. The central idea of our approach is to use a large amount of historical data to initialize the online models based on offline features and learn linear projections that can effectively reduce the dimensionality. We estimate the rank of our linear projections by taking recourse to online model selection based on optimizing predictive likelihood. Through extensive experiments, we show that our method significantly and uniformly outperforms other competitive methods and obtains relative lifts that are in the range of 10-15 in terms of predictive log-likelihood, 200-300 for a rank correlation metric on a proprietary My Yahoo! dataset; it obtains 9 reduction in root mean squared error over the previously best method on a benchmark MovieLens dataset using a time-based train test data split.",
"In this paper, we first provide a new perspective to divide existing high performance object detection methods into direct and indirect regressions. Direct regression performs boundary regression by predicting the offsets from a given point, while indirect regression predicts the offsets from some bounding box proposals. Then we analyze the drawbacks of the indirect regression, which the recent state-of-the-art detection structures like Faster-RCNN and SSD follows, for multi-oriented scene text detection, and point out the potential superiority of direct regression. To verify this point of view, we propose a deep direct regression based method for multi-oriented scene text detection. Our detection framework is simple and effective with a fully convolutional network and one-step post processing. The fully convolutional network is optimized in an end-to-end way and has bi-task outputs where one is pixel-wise classification between text and non-text, and the other is direct regression to determine the vertex coordinates of quadrilateral text boundaries. The proposed method is particularly beneficial for localizing incidental scene texts. On the ICDAR2015 Incidental Scene Text benchmark, our method achieves the F1-measure of 81 , which is a new state-of-the-art and significantly outperforms previous approaches. On other standard datasets with focused scene texts, our method also reaches the state-of-the-art performance.",
"In this paper, we treat the problem of continuous pose estimation for object categories as a regression problem on the basis of only 2D training information. While regression is a natural framework for continuous problems, regression methods so far achieved inferior results with respect to 3D-based and 2D-based classification-and-refinement approaches. This may be attributed to their weakness to high intra-class variability as well as to noisy matching procedures and lack of geometrical constraints. We propose to apply regression to Fisher-encoded vectors computed from large cells by learning an array of Fisher regressors. Fisher encoding makes our algorithm flexible to variations in class appearance, while the array structure permits to indirectly introduce spatial context information in the approach. We formulate our problem as a MAP inference problem, where the likelihood function is composed of a generative term based on the prediction error generated by the ensemble of Fisher regressors as well as a discriminative term based on SVM classifiers. We test our algorithm on three publicly available datasets that envisage several difficulties, such as high intra-class variability, truncations, occlusions, and motion blur, obtaining state-of-the-art results.",
"In this paper, we first provide a new perspective to divide existing high performance object detection methods into direct and indirect regressions. Direct regression performs boundary regression by predicting the offsets from a given point, while indirect regression predicts the offsets from some bounding box proposals. In the context of multioriented scene text detection, we analyze the drawbacks of indirect regression, which covers the state-of-the-art detection structures Faster-RCNN and SSD as instances, and point out the potential superiority of direct regression. To verify this point of view, we propose a deep direct regression based method for multi-oriented scene text detection. Our detection framework is simple and effective with a fully convolutional network and one-step post processing. The fully convolutional network is optimized in an end-to-end way and has bi-task outputs where one is pixel-wise classification between text and non-text, and the other is direct regression to determine the vertex coordinates of quadrilateral text boundaries. The proposed method is particularly beneficial to localize incidental scene texts. On the ICDAR2015 Incidental Scene Text benchmark, our method achieves the F-measure of 81 , which is a new state-ofthe-art and significantly outperforms previous approaches. On other standard datasets with focused scene texts, our method also reaches the state-of-the-art performance.",
"Head pose estimation from images has recently attracted much attention in computer vision due to its diverse applications in face recognition, driver monitoring and human computer interaction. Most successful approaches to head pose estimation formulate the problem as a nonlinear regression between image features and continuous 3D angles (i.e. yaw, pitch and roll). However, regression-like methods suffer from three main drawbacks: (1) They typically lack generalization and overfit when trained using a few samples. (2) They fail to get reliable estimates over some regions of the output space (angles) when the training set is not uniformly sampled. For instance, if the training data contains under-sampled areas for some angles. (3) They are not robust to image noise or occlusion. To address these problems, this paper presents Supervised Local Subspace Learning (SL2), a method that learns a local linear model from a sparse and non-uniformly sampled training set. SL2 learns a mixture of local tangent spaces that is robust to under-sampled regions, and due to its regularization properties it is also robust to over-fitting. Moreover, because SL2 is a generative model, it can deal with image noise. Experimental results on the CMU Multi-PIE and BU-3DFE database show the effectiveness of our approach in terms of accuracy and computational complexity.",
"The standard linear regression (SLR) problem is to recover a vector x0 from noisy linear observations y = Ax0 + w. The approximate message passing (AMP) algorithm recently proposed by Donoho, Maleki, and Montanari is a computationally efficient iterative approach to SLR that has a remarkable property: for large i.i.d. sub-Gaussian matrices A, its periteration behavior is rigorously characterized by a scalar stateevolution whose fixed points, when unique, are Bayes optimal. AMP, however, is fragile in that even small deviations from the i.i.d. sub-Gaussian model can cause the algorithm to diverge. This paper considers a “vector AMP” (VAMP) algorithm and shows that VAMP has a rigorous scalar state-evolution that holds under a much broader class of large random matrices A: those that are right-rotationally invariant. After performing an initial singular value decomposition (SVD) of A, the per-iteration complexity of VAMP is similar to that of AMP. In addition, the fixed points of VAMP's state evolution are consistent with the replica prediction of the minimum mean-squared error recently derived by Tulino, Caire, Verdu, and Shamai.",
"We consider supervised learning problems within the positive-definite kernel framework, such as kernel ridge regression, kernel logistic regression or the support vector machine. With kernels leading to infinite-dimensional feature spaces, a common practical limiting difficulty is the necessity of computing the kernel matrix, which most frequently leads to algorithms with running time at least quadratic in the number of observations n, i.e., O(n^2). Low-rank approximations of the kernel matrix are often considered as they allow the reduction of running time complexities to O(p^2 n), where p is the rank of the approximation. The practicality of such methods thus depends on the required rank p. In this paper, we show that in the context of kernel ridge regression, for approximations based on a random subset of columns of the original kernel matrix, the rank p may be chosen to be linear in the degrees of freedom associated with the problem, a quantity which is classically used in the statistical analysis of such methods, and is often seen as the implicit number of parameters of non-parametric estimators. This result enables simple algorithms that have sub-quadratic running time complexity, but provably exhibit the same predictive performance than existing algorithms, for any given problem instance, and not only for worst-case situations.",
"Breiman (2001a,b) has recently developed an ensemble classification and regression approach that displayed outstanding performance with regard prediction error on a suite of benchmark datasets. As the base constituents of the ensemble are tree-structured predictors, and since each of these is constructed using an injection of randomness, the method is called ‘random forests’. That the exceptional performance is attained with seemingly only a single tuning parameter, to which sensitivity is minimal, makes the methodology all the more remarkable. The individual trees comprising the forest are all grown to maximal depth. While this helps with regard bias, there is the familiar tradeoff with variance. However, these variability concerns were potentially obscured because of an interesting feature of those benchmarking datasets extracted from the UCI machine learning repository for testing: all these datasets are hard to overfit using tree-structured methods. This raises issues about the scope of the repository. With this as motivation, and coupled with experience from boosting methods, we revisit the formulation of random forests and investigate prediction performance on real-world and simulated datasets for which maximally sized trees do overfit. These explorations reveal that gains can be realized by additional tuning to regulate tree size via limiting the number of splits and or the size of nodes for which splitting is allowed. Nonetheless, even in these settings, good performance for random forests can be attained by using larger (than default) primary tuning parameter values.",
"Regression based methods are not performing as well as detection based methods for human pose estimation. A central problem is that the structural information in the pose is not well exploited in the previous regression methods. In this work, we propose a structure-aware regression approach. It adopts a reparameterized pose representation using bones instead of joints. It exploits the joint connection structure to define a compositional loss function that encodes the long range interactions in the pose. It is simple, effective, and general for both 2D and 3D pose estimation in a unified setting. Comprehensive evaluation validates the effectiveness of our approach. It significantly advances the state-of-the-art on Human3.6M and is competitive with state-of-the-art results on MPII.",
"In this paper, we formally present a novel estimation method, referred to as the Stochastic Learning Weak Estimator (SLWE), which yields the estimate of the parameters of a binomial distribution, where the convergence of the estimate is weak, i.e. with regard to the first and second moments. The estimation is based on the principles of stochastic learning. The mean of the final estimate is independent of the scheme's learning coefficient, @l, and both the variance of the final distribution and the speed decrease with @l. Similar results are true for the multinomial case, except that the equations transform from being of a scalar type to be of a vector type. Amazingly enough, the speed of the latter only depends on the same parameter, @l, which turns out to be the only non-unity eigenvalue of the underlying stochastic matrix that determines the time-dependence of the estimates. An empirical analysis on synthetic data shows the advantages of the scheme for non-stationary distributions. The paper also briefly reports (without detailed explanation) conclusive results that demonstrate the superiority of SLWE in pattern-recognition-based data compression, where the underlying data distribution is non-stationary. Finally, and more importantly, the paper includes the results of two pattern recognition exercises, the first of which involves artificial data, and the second which involves the recognition of the types of data that are present in news reports of the Canadian Broadcasting Corporation (CBC). The superiority of the SLWE in both these cases is demonstrated.",
"While the study of the connection between discourse patterns and personal identification is decades old, the study of these patterns using language technologies is relatively recent. In that more recent tradition we frame author age prediction from text as a regression problem. We explore the same task using three very different genres of data simultaneously: blogs, telephone conversations, and online forum posts. We employ a technique from domain adaptation that allows us to train a joint model involving all three corpora together as well as separately and analyze differences in predictive features across joint and corpus-specific aspects of the model. Effective features include both stylistic ones (such as POS patterns) as well as content oriented ones. Using a linear regression model based on shallow text features, we obtain correlations up to 0.74 and mean absolute errors between 4.1 and 6.8 years."
]
} |
1907.11752 | 2965643485 | Decision making under uncertain conditions has been well studied when uncertainty can only be considered at the associative level of information. The classical Theorems of von Neumann-Morgenstern and Savage provide a formal criterion for rationally making choices using associative information. We provide here a previous result from Pearl and show that it can be considered as a causal version of the von Neumann-Morgenstern Theorem; furthermore, we consider the case when the true causal mechanism that controls the environment is unknown to the decision maker and propose a causal version of the Savage Theorem. As applications, we argue how previous optimal action learning methods for causal environments fit within the Causal Savage Theorem we present thus showing the utility of our result in the justification and design of learning algorithms; furthermore, we define a Causal Nash Equilibria for a strategic game in a causal environment in terms of the preferences induced by our Causal Decision Making Theorem. | A previous attempt to formalize Decision Theory in the presence of Causal Information is given in @cite_53 , @cite_70 . According to such a formulation, a decision maker must choose whatever action is most likely to (causally) produce the desired outcomes while keeping any beliefs about causal relations fixed ( @cite_4 ). This is stated by the Stalnaker ( @cite_59 ) equation, where @math is to be read as @math @math ( @cite_41 , @cite_11 ). Lewis' and Joyce's work captured the intuition that causal relations may be used to control the environment and to predict what is caused by the actions of a decision maker. In Section we refine the @math operator with an explicit way of calculating the probability of causing an outcome by doing a certain action, in terms of Pearl's do-calculus. | {
"cite_N": [
"@cite_4",
"@cite_70",
"@cite_41",
"@cite_53",
"@cite_59",
"@cite_11"
],
"mid": [
"1480413091",
"2542071751",
"1850984366",
"1961009203",
"2064134784",
"2321919568"
],
"abstract": [
"We present a definition of cause and effect in terms of decision-theoretic primitives and thereby provide a principled foundation for causal reasoning. Our definition departs from the traditional view of causation in that causal assertions may vary with the set of decisions available. We argue that this approach provides added clarity to the notion of cause. Also in this paper, we examine the encoding of causal relationships in directed acyclic graphs. We describe a special class of influence diagrams, those in canonical form, and show its relationship to Pearl's representation of cause and effect. Finally, we show how canonical form facilitates counterfactual reasoning.",
"Motivated by online recommendation and advertising systems, we consider a causal model for stochastic contextual bandits with a latent low-dimensional confounder. In our model, there are @math observed contexts and @math arms of the bandit. The observed context influences the reward obtained through a latent confounder variable with cardinality @math ( @math ). The arm choice and the latent confounder causally determines the reward while the observed context is correlated with the confounder. Under this model, the @math mean reward matrix @math (for each context in @math and each arm in @math ) factorizes into non-negative factors @math ( @math ) and @math ( @math ). This insight enables us to propose an @math -greedy NMF-Bandit algorithm that designs a sequence of interventions (selecting specific arms), that achieves a balance between learning this low-dimensional structure and selecting the best arm to minimize regret. Our algorithm achieves a regret of @math at time @math , as compared to @math for conventional contextual bandits, assuming a constant gap between the best arm and the rest for each context. These guarantees are obtained under mild sufficiency conditions on the factors that are weaker versions of the well-known Statistical RIP condition. We further propose a class of generative models that satisfy our sufficient conditions, and derive a lower bound of @math . These are the first regret guarantees for online matrix completion with bandit feedback, when the rank is greater than one. We further compare the performance of our algorithm with the state of the art, on synthetic and real world data-sets.",
"The discovery of causal relationships from purely observational data is a fundamental problem in science. The most elementary form of such a causal discovery problem is to decide whether X causes Y or, alternatively, Y causes X, given joint observations of two variables X,Y. An example is to decide whether altitude causes temperature, or vice versa, given only joint measurements of both variables. Even under the simplifying assumptions of no confounding, no feedback loops, and no selection bias, such bivariate causal discovery problems are challenging. Nevertheless, several approaches for addressing those problems have been proposed in recent years. We review two families of such methods: methods based on Additive Noise Models (ANMs) and Information Geometric Causal Inference (IGCI). We present the benchmark CAUSEEFFECTPAIRS that consists of data for 100 different causee ffect pairs selected from 37 data sets from various domains (e.g., meteorology, biology, medicine, engineering, economy, etc.) and motivate our decisions regarding the \"ground truth\" causal directions of all pairs. We evaluate the performance of several bivariate causal discovery methods on these real-world benchmark data and in addition on artificially simulated data. Our empirical results on real-world data indicate that certain methods are indeed able to distinguish cause from effect using only purely observational data, although more benchmark data would be needed to obtain statistically significant conclusions. One of the best performing methods overall is the method based on Additive Noise Models that has originally been proposed by (2009), which obtains an accuracy of 63 ± 10 and an AUC of 0.74 ± 0.05 on the real-world benchmark. As the main theoretical contribution of this work we prove the consistency of that method.",
"Mining for association rules in market basket data has proved a fruitful area of research. Measures such as conditional probability (confidence) and correlation have been used to infer rules of the form “the existence of item A implies the existence of item B.” However, such rules indicate only a statistical relationship between A and B. They do not specify the nature of the relationship: whether the presence of A causes the presence of B, or the converse, or some other attribute or phenomenon causes both to appear together. In applications, knowing such causal relationships is extremely useful for enhancing understanding and effecting change. While distinguishing causality from correlation is a truly difficult problem, recent work in statistics and Bayesian learning provide some avenues of attack. In these fields, the goal has generally been to learn complete causal models, which are essentially impossible to learn in large-scale data mining applications with a large number of variables. In this paper, we consider the problem of determining casual relationships, instead of mere associations, when mining market basket data. We identify some problems with the direct application of Bayesian learning ideas to mining large databases, concerning both the scalability of algorithms and the appropriateness of the statistical techniques, and introduce some initial ideas for dealing with these problems. We present experimental results from applying our algorithms on several large, real-world data sets. The results indicate that the approach proposed here is both computationally feasible and successful in identifying interesting causal structures. An interesting outcome is that it is perhaps easier to infer the lack of causality than to infer causality, information that is useful in preventing erroneous decision making.",
"In this paper we present a language to reason about actions in a probabilistic setting and compare our work with earlier work by Pearl.The main feature of our language is its use of static and dynamic causal laws, and use of unknown (or background) variables - whose values are determined by factors beyond our model - in incorporating probabilities. We use two kind of unknown variables: inertial and non-inertial. Inertial unknown variables are helpful in assimilating observations and modeling counterfactuals and causality; while non-inertial unknown variables help characterize stochastic behavior, such as the outcome of tossing a coin, that are not impacted by observations. Finally, we give a glimpse of incorporating probabilities into reasoning with narratives.",
"Causal reasoning plays an essential role in both informal and formal human decision-making. Causality itself as well as human understanding of causality is imprecise, sometimes necessarily so. A common sense understanding of the world tells us that we have to deal with imprecision, uncertainty and imperfect knowledge. A difficulty is striking a good balance between precise formalism and commonsense imprecise reality. An algorithmic method of accommodating imprecision in causality is needed. Today, data mining holds the promise of extracting unsuspected information from very large databases. However, the most common data mining rule forms do not express a causal relationship. Without understanding the underlying causality, a naive use of data mining rules can lead to undesirable actions."
]
} |
1907.11752 | 2965643485 | Decision making under uncertain conditions has been well studied when uncertainty can only be considered at the associative level of information. The classical Theorems of von Neumann-Morgenstern and Savage provide a formal criterion for rationally making choices using associative information. We provide here a previous result from Pearl and show that it can be considered as a causal version of the von Neumann-Morgenstern Theorem; furthermore, we consider the case when the true causal mechanism that controls the environment is unknown to the decision maker and propose a causal version of the Savage Theorem. As applications, we argue how previous optimal action learning methods for causal environments fit within the Causal Savage Theorem we present, thus showing the utility of our result in the justification and design of learning algorithms; furthermore, we define a Causal Nash Equilibrium for a strategic game in a causal environment in terms of the preferences induced by our Causal Decision Making Theorem. | @cite_43 provides a framework for defining the notions of cause and effect in terms of decision-theoretic concepts, such as states and outcomes, and gives a theoretical basis for graphical descriptions of causes and effects, such as causal influence diagrams ( @cite_54 ). Heckerman gave an elegant definition of causality, but did not address how to actually make choices using causal information. | {
"cite_N": [
"@cite_43",
"@cite_54"
],
"mid": [
"1480413091",
"1850984366"
],
"abstract": [
"We present a definition of cause and effect in terms of decision-theoretic primitives and thereby provide a principled foundation for causal reasoning. Our definition departs from the traditional view of causation in that causal assertions may vary with the set of decisions available. We argue that this approach provides added clarity to the notion of cause. Also in this paper, we examine the encoding of causal relationships in directed acyclic graphs. We describe a special class of influence diagrams, those in canonical form, and show its relationship to Pearl's representation of cause and effect. Finally, we show how canonical form facilitates counterfactual reasoning.",
"The discovery of causal relationships from purely observational data is a fundamental problem in science. The most elementary form of such a causal discovery problem is to decide whether X causes Y or, alternatively, Y causes X, given joint observations of two variables X,Y. An example is to decide whether altitude causes temperature, or vice versa, given only joint measurements of both variables. Even under the simplifying assumptions of no confounding, no feedback loops, and no selection bias, such bivariate causal discovery problems are challenging. Nevertheless, several approaches for addressing those problems have been proposed in recent years. We review two families of such methods: methods based on Additive Noise Models (ANMs) and Information Geometric Causal Inference (IGCI). We present the benchmark CAUSEEFFECTPAIRS that consists of data for 100 different causee ffect pairs selected from 37 data sets from various domains (e.g., meteorology, biology, medicine, engineering, economy, etc.) and motivate our decisions regarding the \"ground truth\" causal directions of all pairs. We evaluate the performance of several bivariate causal discovery methods on these real-world benchmark data and in addition on artificially simulated data. Our empirical results on real-world data indicate that certain methods are indeed able to distinguish cause from effect using only purely observational data, although more benchmark data would be needed to obtain statistically significant conclusions. One of the best performing methods overall is the method based on Additive Noise Models that has originally been proposed by (2009), which obtains an accuracy of 63 ± 10 and an AUC of 0.74 ± 0.05 on the real-world benchmark. As the main theoretical contribution of this work we prove the consistency of that method."
]
} |
1907.11703 | 2966816175 | Deep reinforcement learning has achieved great successes in recent years; however, one main challenge is the sample inefficiency. In this paper, we focus on how to use action guidance by means of a non-expert demonstrator to improve sample efficiency in a domain with sparse, delayed, and possibly deceptive rewards: the recently-proposed multi-agent benchmark of Pommerman. We propose a new framework where even a non-expert simulated demonstrator, e.g., planning algorithms such as Monte Carlo tree search with a small number of rollouts, can be integrated within asynchronous distributed deep reinforcement learning methods. Compared to a vanilla deep RL algorithm, our proposed methods both learn faster and converge to better policies on a two-player mini version of the Pommerman game. | Safe Reinforcement Learning tries to ensure reasonable system performance and/or respect safety constraints during the learning and/or deployment processes @cite_30 . Roughly, there are two ways of doing safe RL: some methods adapt the optimality criterion, while others adapt the exploration mechanism. Our work uses continuous action guidance of lookahead search with MCTS for better exploration. | {
"cite_N": [
"@cite_30"
],
"mid": [
"1845972764"
],
"abstract": [
"Safe Reinforcement Learning can be defined as the process of learning policies that maximize the expectation of the return in problems in which it is important to ensure reasonable system performance and or respect safety constraints during the learning and or deployment processes. We categorize and analyze two approaches of Safe Reinforcement Learning. The first is based on the modification of the optimality criterion, the classic discounted finite infinite horizon, with a safety factor. The second is based on the modification of the exploration process through the incorporation of external knowledge or the guidance of a risk metric. We use the proposed classification to survey the existing literature, as well as suggesting future directions for Safe Reinforcement Learning."
]
} |
1907.11703 | 2966816175 | Deep reinforcement learning has achieved great successes in recent years; however, one main challenge is the sample inefficiency. In this paper, we focus on how to use action guidance by means of a non-expert demonstrator to improve sample efficiency in a domain with sparse, delayed, and possibly deceptive rewards: the recently-proposed multi-agent benchmark of Pommerman. We propose a new framework where even a non-expert simulated demonstrator, e.g., planning algorithms such as Monte Carlo tree search with a small number of rollouts, can be integrated within asynchronous distributed deep reinforcement learning methods. Compared to a vanilla deep RL algorithm, our proposed methods both learn faster and converge to better policies on a two-player mini version of the Pommerman game. | Approaches such as DAgger @cite_31 formulate imitation learning as a supervised learning problem where the aim is to match the demonstrator's performance. However, the performance of agents using these methods is upper-bounded by the demonstrator. Recent works such as Expert Iteration @cite_2 and AlphaGo Zero @cite_4 extend imitation learning to the RL setting where the demonstrator is also continuously improved during training. There has been a growing body of work on imitation learning where demonstrators' data is used to speed up policy learning in RL @cite_10 . | {
"cite_N": [
"@cite_10",
"@cite_31",
"@cite_4",
"@cite_2"
],
"mid": [
"2804930149",
"2767506186",
"2735089625",
"2802726207"
],
"abstract": [
"Imitation learning (IL) consists of a set of tools that leverage expert demonstrations to quickly learn policies. However, if the expert is suboptimal, IL can yield policies with inferior performance compared to reinforcement learning (RL). In this paper, we aim to provide an algorithm that combines the best aspects of RL and IL. We accomplish this by formulating several popular RL and IL algorithms in a common mirror descent framework, showing that these algorithms can be viewed as a variation on a single approach. We then propose LOKI, a strategy for policy learning that first performs a small but random number of IL iterations before switching to a policy gradient RL method. We show that if the switching time is properly randomized, LOKI can learn to outperform a suboptimal expert and converge faster than running policy gradient from scratch. Finally, we evaluate the performance of LOKI experimentally in several simulated environments.",
"Deep learning techniques have shown success in learning from raw high-dimensional data in various applications. While deep reinforcement learning is recently gaining popularity as a method to train intelligent agents, utilizing deep learning in imitation learning has been scarcely explored. Imitation learning can be an efficient method to teach intelligent agents by providing a set of demonstrations to learn from. However, generalizing to situations that are not represented in the demonstrations can be challenging, especially in 3D environments. In this paper, we propose a deep imitation learning method to learn navigation tasks from demonstrations in a 3D environment. The supervised policy is refined using active learning in order to generalize to unseen situations. This approach is compared to two popular deep reinforcement learning techniques: deep-Q-networks and Asynchronous actor-critic (A3C). The proposed method as well as the reinforcement learning methods employ deep convolutional neural networks and learn directly from raw visual input. Methods for combining learning from demonstrations and experience are also investigated. This combination aims to join the generalization ability of learning by experience with the efficiency of learning by imitation. The proposed methods are evaluated on 4 navigation tasks in a 3D simulated environment. Navigation tasks are a typical problem that is relevant to many real applications. They pose the challenge of requiring demonstrations of long trajectories to reach the target and only providing delayed rewards (usually terminal) to the agent. The experiments show that the proposed method can successfully learn navigation tasks from raw visual input while learning from experience methods fail to learn an effective policy. Moreover, it is shown that active learning can significantly improve the performance of the initially learned policy using a small number of active samples.",
"Deep generative models have recently shown great promise in imitation learning for motor control. Given enough data, even supervised approaches can do one-shot imitation learning; however, they are vulnerable to cascading failures when the agent trajectory diverges from the demonstrations. Compared to purely supervised methods, Generative Adversarial Imitation Learning (GAIL) can learn more robust controllers from fewer demonstrations, but is inherently mode-seeking and more difficult to train. In this paper, we show how to combine the favourable aspects of these two approaches. The base of our model is a new type of variational autoencoder on demonstration trajectories that learns semantic policy embeddings. We show that these embeddings can be learned on a 9 DoF Jaco robot arm in reaching tasks, and then smoothly interpolated with a resulting smooth interpolation of reaching behavior. Leveraging these policy representations, we develop a new version of GAIL that (1) is much more robust than the purely-supervised controller, especially with few demonstrations, and (2) avoids mode collapse, capturing many diverse behaviors when GAIL on its own does not. We demonstrate our approach on learning diverse gaits from demonstration on a 2D biped and a 62 DoF 3D humanoid in the MuJoCo physics environment.",
"Humans often learn how to perform tasks via imitation: they observe others perform a task, and then very quickly infer the appropriate actions to take based on their observations. While extending this paradigm to autonomous agents is a well-studied problem in general, there are two particular aspects that have largely been overlooked: (1) that the learning is done from observation only (i.e., without explicit action information), and (2) that the learning is typically done very quickly. In this work, we propose a two-phase, autonomous imitation learning technique called behavioral cloning from observation (BCO), that aims to provide improved performance with respect to both of these aspects. First, we allow the agent to acquire experience in a self-supervised fashion. This experience is used to develop a model which is then utilized to learn a particular task by observing an expert perform that task without the knowledge of the specific actions taken. We experimentally compare BCO to imitation learning methods, including the state-of-the-art, generative adversarial imitation learning (GAIL) technique, and we show comparable task performance in several different simulation domains while exhibiting increased learning speed after expert trajectories become available."
]
} |
1907.11703 | 2966816175 | Deep reinforcement learning has achieved great successes in recent years; however, one main challenge is the sample inefficiency. In this paper, we focus on how to use action guidance by means of a non-expert demonstrator to improve sample efficiency in a domain with sparse, delayed, and possibly deceptive rewards: the recently-proposed multi-agent benchmark of Pommerman. We propose a new framework where even a non-expert simulated demonstrator, e.g., planning algorithms such as Monte Carlo tree search with a small number of rollouts, can be integrated within asynchronous distributed deep reinforcement learning methods. Compared to a vanilla deep RL algorithm, our proposed methods both learn faster and converge to better policies on a two-player mini version of the Pommerman game. | ( hester2017deep ) used demonstrator data for pre-training by combining the supervised learning loss with the Q-learning loss within the DQN algorithm, and showed that their method achieves good results on Atari games by using a few minutes of game-play data. ( kim2013learning ) proposed a learning-from-demonstration approach where limited demonstrator data is used to impose constraints on the policy iteration phase. Another recent work @cite_1 used planner demonstrations to learn a value function, which was then further refined with RL and a short-horizon planner for robotic manipulation tasks. | {
"cite_N": [
"@cite_1"
],
"mid": [
"2788862220"
],
"abstract": [
"Deep reinforcement learning (RL) has achieved several high profile successes in difficult decision-making problems. However, these algorithms typically require a huge amount of data before they reach reasonable performance. In fact, their performance during learning can be extremely poor. This may be acceptable for a simulator, but it severely limits the applicability of deep RL to many real-world tasks, where the agent must learn in the real environment. In this paper we study a setting where the agent may access data from previous control of the system. We present an algorithm, Deep Q-learning from Demonstrations (DQfD), that leverages small sets of demonstration data to massively accelerate the learning process even from relatively small amounts of demonstration data and is able to automatically assess the necessary ratio of demonstration data while learning thanks to a prioritized replay mechanism. DQfD works by combining temporal difference updates with supervised classification of the demonstrator's actions. We show that DQfD has better initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN) as it starts with better scores on the first million steps on 41 of 42 games and on average it takes PDD DQN 83 million steps to catch up to DQfD's performance. DQfD learns to out-perform the best demonstration given in 14 of 42 games. In addition, DQfD leverages human demonstrations to achieve state-of-the-art results for 11 games. Finally, we show that DQfD performs better than three related algorithms for incorporating demonstration data into DQN."
]
} |
1907.11703 | 2966816175 | Deep reinforcement learning has achieved great successes in recent years; however, one main challenge is the sample inefficiency. In this paper, we focus on how to use action guidance by means of a non-expert demonstrator to improve sample efficiency in a domain with sparse, delayed, and possibly deceptive rewards: the recently-proposed multi-agent benchmark of Pommerman. We propose a new framework where even a non-expert simulated demonstrator, e.g., planning algorithms such as Monte Carlo tree search with a small number of rollouts, can be integrated within asynchronous distributed deep reinforcement learning methods. Compared to a vanilla deep RL algorithm, our proposed methods both learn faster and converge to better policies on a two-player mini version of the Pommerman game. | Previous work @cite_19 combined planning and RL in such a way that RL can explore the action space filtered down by the planner, outperforming the use of either a planner or RL alone. Other work @cite_23 employed MCTS as a high-level planner, which is fed a set of low-level offline-learned DRL policies and refines them for safer execution within a simulated autonomous driving domain. A recent work by ( vodopivec2017monte ) unified RL, planning, and search. | {
"cite_N": [
"@cite_19",
"@cite_23"
],
"mid": [
"2837605352",
"2778917778"
],
"abstract": [
"Autonomous urban driving navigation with complex multi-agent dynamics is under-explored due to the difficulty of learning an optimal driving policy. The traditional modular pipeline heavily relies on hand-designed rules and the pre-processing perception system while the supervised learning-based models are limited by the accessibility of extensive human experience. We present a general and principled Controllable Imitative Reinforcement Learning (CIRL) approach which successfully makes the driving agent achieve higher success rates based on only vision inputs in a high-fidelity car simulator. To alleviate the low exploration efficiency for large continuous action space that often prohibits the use of classical RL on challenging real tasks, our CIRL explores over a reasonably constrained action space guided by encoded experiences that imitate human demonstrations, building upon Deep Deterministic Policy Gradient (DDPG). Moreover, we propose to specialize adaptive policies and steering-angle reward designs for different control signals (i.e. follow, straight, turn right, turn left) based on the shared representations to improve the model capability in tackling with diverse cases. Extensive experiments on CARLA driving benchmark demonstrate that CIRL substantially outperforms all previous methods in terms of the percentage of successfully completed episodes on a variety of goal-directed driving tasks. We also show its superior generalization capability in unseen environments. To our knowledge, this is the first successful case of the learned driving policy by reinforcement learning in the high-fidelity simulator, which performs better than supervised imitation learning.",
"Fuelled by successes in Computer Go, Monte Carlo tree search (MCTS) has achieved wide-spread adoption within the games community. Its links to traditional reinforcement learning (RL) methods have been outlined in the past; however, the use of RL techniques within tree search has not been thoroughly studied yet. In this paper we re-examine in depth this close relation between the two fields; our goal is to improve the cross-awareness between the two communities. We show that a straightforward adaptation of RL semantics within tree search can lead to a wealth of new algorithms, for which the traditional MCTS is only one of the variants. We confirm that planning methods inspired by RL in conjunction with online search demonstrate encouraging results on several classic board games and in arcade video game competitions, where our algorithm recently ranked first. Our study promotes a unified view of learning, planning, and search."
]
} |
1907.11717 | 2966432599 | The benefits of the ubiquitous caching in information centric networking (ICN) are profound; even though such features make ICN promising for content distribution, but it also introduces a challenge to content protection against the unauthorized access. The protection of a content against unauthorized access requires consumer authentication and involves the conventional end-to-end encryption. However, in ICN, such end-to-end encryption makes the content caching ineffective since encrypted contents stored in a cache are useless for any consumers except those who know the encryption key. For effective caching of encrypted contents in ICN, we propose a secure distribution of protected content (SDPC) scheme, which ensures that only authenticated consumers can access the content. SDPC is lightweight and allows consumers to verify the originality of the published content by using a symmetric key encryption. SDPC also provides protection against privacy leakage. The security of SDPC was proved with the Burrows–Abadi–Needham (BAN) logic and Scyther tool verification, and simulation results show that SDPC can reduce the content download delay. | Most existing access control schemes for secure contents are application specific or lack security strength. For example, in @cite_0 , the authors presented a scheme for protected contents using network coding as encryption. However, the scheme requires a private connection between the publisher and consumer to obtain the decoding matrix and missing data blocks. @cite_25 , the authors presented a security framework for the copyrighted video streaming in ICN based on linear random coding. It is proven that the linear random coding alone improves the performance of ICN @cite_26 . However in @cite_25 , each video was encrypted with a large number of symmetric encryption keys, such that each video frame was encrypted with a unique symmetric encryption key. Since only authorized users who possessed the set of all keys could decrypt the video content, the distribution of a large number of keys for each video content can be an extra communication overhead. | {
"cite_N": [
"@cite_0",
"@cite_26",
"@cite_25"
],
"mid": [
"2514042371",
"2115677140",
"1567993328"
],
"abstract": [
"As a novel network architecture, Information-Centric Networking(ICN) has a good performance in security, mobility and scalability. Although in-network cache used in ICN can effectively solve the problem of network congestion. Meanwhile, it also brings a lot of challenges such as copyright protection. How to prevent unauthorized user access to the large-sized contents of the route has become the focus of the research. Some of the current solutions are based on the traditional encryption technology. But these solutions don't applay to ICN. Current approaches that rely on a common encryption key among authorized users cannot protect copyright well since if authorized user leaks the private key out, we cannot tell who has leaked the key out. In this paper, we use a novel scheme to solve the problem of copyright protection. In this scheme, we take the method of the network encoding. The first, we have splitted the large-sized content into N blocks. Then through linear network coding(LNC) encrypted the content, if the user can not obtain the decrypted matrix, the user will not be able to decrypt the content. Therefore, the scheme can achieve the protection of the content. Our analysis of this program shows that the scheme has a good performance in copyright protection.",
"In this paper, we describe a configurable content-based MPEG video authentication scheme, which is robust to typical video transcoding approaches, namely frame resizing, frame dropping and requantization. By exploiting the synergy between cryptographic signature, forward error correction (FEC) and digital watermarking, the generated content-based message authentication code (MAC or keyed crypto hash) is embedded back into the video to reduce the transmission cost. The proposed scheme is secure against malicious attacks such as video frame insertion and alteration. System robustness and security are balanced in a configurable way (i.e., more robust the system is, less secure the system will be). Compressed-domain process makes the scheme computationally efficient. Furthermore, the proposed scheme is compliant with state-of-the-art public key infrastructure. Experimental results demonstrate the validity of the proposed scheme",
"Shifting from host-oriented to data-oriented, information-centric networking (ICN) adopts several key design principles, e.g., in-network caching, to cope with the tremendous internet growth. In the ICN setting, data to be distributed can be cached by ICN routers anywhere and accessed arbitrarily by customers without data publishers' permission, which imposes new challenges when achieving data access control: (i) security: How can data publishers protect data confidentiality (either data cached by ICN routers or data accessed by authorized users) even when an authorized user's decryption key was revoked or compromised, and (ii) scalability: How can data publishers leverage ICN's promising features and enforce access control without complicated key management or extensive communication. This paper addresses these challenges by using the new proposed dual-phase encryption that uniquely combines the ideas from one-time decryption key, proxy re-encryption and all-or-nothing transformation, while still being able to leverage ICN's features. Our analysis and performance show that our solution is highly efficient and provable secure under the existing security model."
]
} |
1907.11717 | 2966432599 | The benefits of the ubiquitous caching in information centric networking (ICN) are profound; even though such features make ICN promising for content distribution, they also introduce a challenge to content protection against unauthorized access. The protection of content against unauthorized access requires consumer authentication and involves conventional end-to-end encryption. However, in ICN, such end-to-end encryption makes content caching ineffective since encrypted contents stored in a cache are useless for any consumers except those who know the encryption key. For effective caching of encrypted contents in ICN, we propose a secure distribution of protected content (SDPC) scheme, which ensures that only authenticated consumers can access the content. SDPC is lightweight and allows consumers to verify the originality of the published content by using symmetric key encryption. SDPC also provides protection against privacy leakage. The security of SDPC was proved with the Burrows–Abadi–Needham (BAN) logic and Scyther tool verification, and simulation results show that SDPC can reduce the content download delay. | Most existing access control schemes for secure contents are application specific or lack security strength. For example, in @cite_0 , the authors presented a scheme for protected contents using network coding as encryption. However, the scheme requires a private connection between the publisher and consumer to obtain the decoding matrix and missing data blocks. In @cite_25 , the authors presented a security framework for copyrighted video streaming in ICN based on linear random coding. It has been shown that linear random coding alone improves the performance of ICN @cite_26 . However, in @cite_25 , each video was encrypted with a large number of symmetric encryption keys, such that each video frame was encrypted with a unique symmetric encryption key. Since only authorized users who possessed the set of all keys could decrypt the video content, the distribution of a large number of keys for each video content can be an extra communication overhead. | {
"cite_N": [
"@cite_35",
"@cite_23"
],
"mid": [
"1973956822",
"2963353797"
],
"abstract": [
"We design and analyze a method to extract secret keys from the randomness inherent to wireless channels. We study a channel model for a multipath wireless channel and exploit the channel diversity in generating secret key bits. We compare the key extraction methods based both on entire channel state information (CSI) and on single channel parameter such as the received signal strength indicators (RSSI). Due to the reduction in the degree-of-freedom when going from CSI to RSSI, the rate of key extraction based on CSI is far higher than that based on RSSI. This suggests that exploiting channel diversity and making CSI information available to higher layers would greatly benefit the secret key generation. We propose a key generation system based on low-density parity-check (LDPC) codes and describe the design and performance of two systems: one based on binary LDPC codes and the other (useful at higher signal-to-noise ratios) based on four-ary LDPC codes.",
"Due to the publicly-known deterministic character- istic of pilot tones, pilot-aware attack, by jamming, nulling and spoofing pilot tones, can significantly paralyze the uplink channel training in large-scale MISO-OFDM systems. To solve this, we in this paper develop an independence-checking coding based (ICCB) uplink training architecture for one-ring scattering scenarios allowing for uniform linear arrays (ULA) deployment. Here, we not only insert randomized pilots on subcarriers for channel impulse response (CIR) estimation, but also diversify and encode subcarrier activation patterns (SAPs) to convey those pilots simultaneously. The coded SAPs, though interfered by arbitrary unknown SAPs in wireless environment, are qualified to be reliably identified and decoded into the original pilots by checking the hidden channel independence existing in sub- carriers. Specifically, an independence-checking coding (ICC) theory is formulated to support the encoding decoding process in this architecture. The optimal ICC code is further devel- oped for guaranteeing a well-imposed estimation of CIR while maximizing the code rate. Based on this code, the identification error probability (IEP) is characterized to evaluate the reliability of this architecture. Interestingly, we discover the principle of IEP reduction by exploiting the array spatial correlation, and prove that zero- IEP, i.e., perfect reliability, can be guaranteed under continuously-distributed mean angle of arrival (AoA). Besides this, a novel closed form of IEP expression is derived in discretely-distributed case. Simulation results finally verify the effectiveness of the proposed architecture."
]
} |
1907.11717 | 2966432599 | The benefits of the ubiquitous caching in information centric networking (ICN) are profound; even though such features make ICN promising for content distribution, they also introduce a challenge to content protection against unauthorized access. The protection of content against unauthorized access requires consumer authentication and involves conventional end-to-end encryption. However, in ICN, such end-to-end encryption makes content caching ineffective since encrypted contents stored in a cache are useless for any consumers except those who know the encryption key. For effective caching of encrypted contents in ICN, we propose a secure distribution of protected content (SDPC) scheme, which ensures that only authenticated consumers can access the content. SDPC is lightweight and allows consumers to verify the originality of the published content by using symmetric key encryption. SDPC also provides protection against privacy leakage. The security of SDPC was proved with the Burrows–Abadi–Needham (BAN) logic and Scyther tool verification, and simulation results show that SDPC can reduce the content download delay. | In earlier work @cite_35 , the authors proposed a content access control scheme for the ICN-enabled wireless edge. The proposed scheme is an extension of @cite_23 , which employs a public-key based algorithm and Shamir's secret sharing as building blocks, named AccConF. To obtain a unique interpolating polynomial of Shamir's scheme, AccConF adopts the Lagrangian interpolation technique. The calculation of the Lagrangian interpolation is a computationally expensive process. To reduce the client-side computational burden, the publisher piggybacks an enabling block with each content, which encapsulates partially solved Lagrangian coefficients. | {
"cite_N": [
"@cite_30"
],
"mid": [
"2080636584"
],
"abstract": [
"Distributed sensor networks are becoming a robust solution that allows users to directly access data generated by individual sensors. In many practical scenarios, fine-grained access control is a pivotal security requirement to enhance usability and protect sensitive sensor information from unauthorized access. Recently, there have been proposed many schemes to adapt public key cryptosystems into sensor systems consisting of high-end sensor nodes in order to enforce security policy efficiently. However, the drawback of these approaches is that the complexity of computation increases linear to the expressiveness of the access policy. Key-policy attribute-based encryption is a promising cryptographic solution to enforce fine-grained access policies on the sensor data. However, the problem of applying it to distributed sensor networks introduces several challenges with regard to the attribute and user revocation. In this paper, we propose an access control scheme using KP-ABE with efficient attribute and user revocation capability for distributed sensor networks that are composed of high-end sensor devices. They can be achieved by the proxy encryption mechanism which takes advantage of attribute-based encryption and selective group key distribution. The analysis results indicate that the proposed scheme achieves efficient user access control while requiring the same computation overhead at each sensor as the previous schemes."
]
} |
1907.11717 | 2966432599 | The benefits of the ubiquitous caching in information centric networking (ICN) are profound; even though such features make ICN promising for content distribution, they also introduce a challenge to content protection against unauthorized access. The protection of content against unauthorized access requires consumer authentication and involves conventional end-to-end encryption. However, in ICN, such end-to-end encryption makes content caching ineffective since encrypted contents stored in a cache are useless for any consumers except those who know the encryption key. For effective caching of encrypted contents in ICN, we propose a secure distribution of protected content (SDPC) scheme, which ensures that only authenticated consumers can access the content. SDPC is lightweight and allows consumers to verify the originality of the published content by using symmetric key encryption. SDPC also provides protection against privacy leakage. The security of SDPC was proved with the Burrows–Abadi–Needham (BAN) logic and Scyther tool verification, and simulation results show that SDPC can reduce the content download delay. | In the work by @cite_30 , access control is realized by a flexible secure content distribution architecture, which combines the proxy re-encryption and identity-based encryption mechanisms. The publisher generates a symmetric key and encrypts the content before dissemination. To access the content from an in-network cache or directly from the publisher, a consumer first sends a request to the publisher to acquire the symmetric encryption key. Upon receiving the key request, the publisher validates and verifies the authenticity of the consumer, and sends the symmetric key encapsulated in a response message encrypted with the consumer’s identity. The proposed scheme eliminates asymmetric encryption, but it is not clear how the consumer’s private identity could be known to the content provider. | {
"cite_N": [
"@cite_33"
],
"mid": [
"2116361063"
],
"abstract": [
"Proxy re-encryption (PRE) allows a semi-trusted proxy to convert a ciphertext originally intended for Alice into one encrypting the same plaintext for Bob. The proxy only needs a re-encryption key given by Alice, and cannot learn anything about the plaintext encrypted. This adds flexibility in various applications, such as confidential email, digital right management and distributed storage. In this paper, we study unidirectional PRE, which the re-encryption key only enables delegation in one direction but not the opposite. In PKC 2009, Shao and Cao proposed a unidirectional PRE assuming the random oracle. However, we show that it is vulnerable to chosen-ciphertext attack (CCA). We then propose an efficient unidirectional PRE scheme (without resorting to pairings). We gain high efficiency and CCA-security using the “token-controlled encryption” technique, under the computational Diffie-Hellman assumption, in the random oracle model and a relaxed but reasonable definition."
]
} |
1907.11717 | 2966432599 | The benefits of the ubiquitous caching in information centric networking (ICN) are profound; even though such features make ICN promising for content distribution, they also introduce a challenge to content protection against unauthorized access. The protection of content against unauthorized access requires consumer authentication and involves conventional end-to-end encryption. However, in ICN, such end-to-end encryption makes content caching ineffective since encrypted contents stored in a cache are useless for any consumers except those who know the encryption key. For effective caching of encrypted contents in ICN, we propose a secure distribution of protected content (SDPC) scheme, which ensures that only authenticated consumers can access the content. SDPC is lightweight and allows consumers to verify the originality of the published content by using symmetric key encryption. SDPC also provides protection against privacy leakage. The security of SDPC was proved with the Burrows–Abadi–Needham (BAN) logic and Scyther tool verification, and simulation results show that SDPC can reduce the content download delay. | In other work @cite_33 , the author proposed a content access control scheme based on proxy re-encryption. In proxy re-encryption, the content is re-encrypted by an intermediate node. In the proposed scheme, the edge routers perform the content re-encryption. Upon receiving a content request, the publisher encrypts the data and a randomly generated key k1, using its public key. Upon receiving the content request, the edge router generates a random key k2, encrypted with the publisher’s public key and signed by the edge router. The edge router sends the encrypted k2 to the publisher, appends the encrypted k2 to the content, and dispatches it towards the consumer. Meanwhile, the publisher verifies the authenticity of the consumer, and generates the content decryption key K using k1, k2, and its public key. Upon receiving K, the consumer can decrypt the content. | {
"cite_N": [
"@cite_10",
"@cite_3",
"@cite_20",
"@cite_7"
],
"mid": [
"2116361063",
"2152926062",
"2765463836",
"2162567660"
],
"abstract": [
"Proxy re-encryption (PRE) allows a semi-trusted proxy to convert a ciphertext originally intended for Alice into one encrypting the same plaintext for Bob. The proxy only needs a re-encryption key given by Alice, and cannot learn anything about the plaintext encrypted. This adds flexibility in various applications, such as confidential email, digital right management and distributed storage. In this paper, we study unidirectional PRE, which the re-encryption key only enables delegation in one direction but not the opposite. In PKC 2009, Shao and Cao proposed a unidirectional PRE assuming the random oracle. However, we show that it is vulnerable to chosen-ciphertext attack (CCA). We then propose an efficient unidirectional PRE scheme (without resorting to pairings). We gain high efficiency and CCA-security using the “token-controlled encryption” technique, under the computational Diffie-Hellman assumption, in the random oracle model and a relaxed but reasonable definition.",
"We present a novel approach to fully homomorphic encryption (FHE) that dramatically improves performance and bases security on weaker assumptions. A central conceptual contribution in our work is a new way of constructing leveled fully homomorphic encryption schemes (capable of evaluating arbitrary polynomial-size circuits), without Gentry's bootstrapping procedure. Specifically, we offer a choice of FHE schemes based on the learning with error (LWE) or ring-LWE (RLWE) problems that have 2λ security against known attacks. For RLWE, we have: • A leveled FHE scheme that can evaluate L-level arithmetic circuits with O(λ · L3) per-gate computation -- i.e., computation quasi-linear in the security parameter. Security is based on RLWE for an approximation factor exponential in L. This construction does not use the bootstrapping procedure. • A leveled FHE scheme that uses bootstrapping as an optimization, where the per-gate computation (which includes the bootstrapping procedure) is O(λ2), independent of L. Security is based on the hardness of RLWE for quasi-polynomial factors (as opposed to the sub-exponential factors needed in previous schemes). We obtain similar results to the above for LWE, but with worse performance. Based on the Ring LWE assumption, we introduce a number of further optimizations to our schemes. As an example, for circuits of large width -- e.g., where a constant fraction of levels have width at least λ -- we can reduce the per-gate computation of the bootstrapped version to O(λ), independent of L, by batching the bootstrapping operation. Previous FHE schemes all required Ω(λ3.5) computation per gate. At the core of our construction is a much more effective approach for managing the noise level of lattice-based ciphertexts as homomorphic operations are performed, using some new techniques recently introduced by Brakerski and Vaikuntanathan (FOCS 2011).",
"Using dynamic Searchable Symmetric Encryption, a user with limited storage resources can securely outsource a database to an untrusted server, in such a way that the database can still be searched and updated efficiently. For these schemes, it would be desirable that updates do not reveal any information a priori about the modifications they carry out, and that deleted results remain inaccessible to the server a posteriori. If the first property, called forward privacy, has been the main motivation of recent works, the second one, backward privacy, has been overlooked. In this paper, we study for the first time the notion of backward privacy for searchable encryption. After giving formal definitions for different flavors of backward privacy, we present several schemes achieving both forward and backward privacy, with various efficiency trade-offs. Our constructions crucially rely on primitives such as constrained pseudo-random functions and puncturable encryption schemes. Using these advanced cryptographic primitives allows for a fine-grained control of the power of the adversary, preventing her from evaluating functions on selected inputs, or decrypting specific ciphertexts. In turn, this high degree of control allows our SSE constructions to achieve the stronger forms of privacy outlined above. As an example, we present a framework to construct forward-private schemes from range-constrained pseudo-random functions. Finally, we provide experimental results for implementations of our schemes, and study their practical efficiency.",
"We survey the notion of provably secure searchable encryption (SE) by giving a complete and comprehensive overview of the two main SE techniques: searchable symmetric encryption (SSE) and public key encryption with keyword search (PEKS). Since the pioneering work of Song, Wagner, and Perrig (IEEE S&P '00), the field of provably secure SE has expanded to the point where we felt that taking stock would provide benefit to the community. The survey has been written primarily for the nonspecialist who has a basic information security background. Thus, we sacrifice full details and proofs of individual constructions in favor of an overview of the underlying key techniques. We categorize and compare the different SE schemes in terms of their security, efficiency, and functionality. For the experienced researcher, we point out connections between the many approaches to SE and identify open research problems. Two major conclusions can be drawn from our work. While the so-called IND-CKA2 security notion becomes prevalent in the literature and efficient (sublinear) SE schemes meeting this notion exist in the symmetric setting, achieving this strong form of security efficiently in the asymmetric setting remains an open problem. We observe that in multirecipient SE schemes, regardless of their efficiency drawbacks, there is a noticeable lack of query expressiveness that hinders deployment in practice."
]
} |
1907.11718 | 2964674144 | We propose to solve the large scale Markowitz mean-variance (MV) portfolio allocation problem using reinforcement learning (RL). By adopting the recently developed continuous-time exploratory control framework, we formulate the exploratory MV problem in high dimensions. We further show the optimality of a multivariate Gaussian feedback policy, with time-decaying variance, in trading off exploration and exploitation. Based on a provable policy improvement theorem, we devise a scalable and data-efficient RL algorithm and conduct large scale empirical tests using data from the S&P 500 stocks. We found that our method consistently achieves over 10% annualized returns and it outperforms econometric methods and the deep RL method by large margins, for both long and medium terms of investment with monthly and daily trading. | The difficulty of seeking the global optimum for Markov Decision Process (MDP) problems under the MV criterion has been previously noted in @cite_30 . In fact, the variance of reward-to-go is nonlinear in expectation and, as a result of Bellman's inconsistency, most of the well-known RL algorithms cannot be applied directly. | {
"cite_N": [
"@cite_30"
],
"mid": [
"2356031020"
],
"abstract": [
"In Markov decision processes (MDPs), the variance of the reward-to-go is a natural measure of uncertainty about the long term performance of a policy, and is important in domains such as finance, resource allocation, and process control. Currently however, there is no tractable procedure for calculating it in large scale MDPs. This is in contrast to the case of the expected reward-to-go, also known as the value function, for which effective simulation-based algorithms are known, and have been used successfully in various domains. In this paper we extend temporal difference (TD) learning algorithms to estimating the variance of the reward-to-go for a fixed policy. We propose variants of both TD(0) and LSTD(λ) with linear function approximation, prove their convergence, and demonstrate their utility in an option pricing problem. Our results show a dramatic improvement in terms of sample efficiency over standard Monte-Carlo methods, which are currently the state-of-the-art."
]
} |
1907.11718 | 2964674144 | We propose to solve the large scale Markowitz mean-variance (MV) portfolio allocation problem using reinforcement learning (RL). By adopting the recently developed continuous-time exploratory control framework, we formulate the exploratory MV problem in high dimensions. We further show the optimality of a multivariate Gaussian feedback policy, with time-decaying variance, in trading off exploration and exploitation. Based on a provable policy improvement theorem, we devise a scalable and data-efficient RL algorithm and conduct large scale empirical tests using data from the S&P 500 stocks. We found that our method consistently achieves over 10% annualized returns and it outperforms econometric methods and the deep RL method by large margins, for both long and medium terms of investment with monthly and daily trading. | Existing works on variance estimation and control generally divide into value-based methods and policy-based methods. @cite_29 obtained the Bellman equation for the variance of reward-to-go under a fixed, given policy. @cite_15 further derived the TD(0) learning rule to estimate the variance, followed by @cite_7 , which applied this value-based method to an MV portfolio selection problem. It is worth noting that due to the definition of the value function (i.e., the variance penalized expected reward-to-go) in @cite_7 , Bellman's optimality principle does not hold. As a result, it is not guaranteed that a greedy policy based on the latest updated value function will eventually lead to the true global optimal policy. The second approach, policy-based RL, was proposed in @cite_16 . They also extended the work to linear function approximators and devised actor-critic algorithms for MV optimization problems for which convergence to the local optimum is guaranteed with probability one ( @cite_17 ). Related works following this line of research include @cite_18 , @cite_4 , among others. Despite the various methods mentioned above, it remains an open and interesting question in RL to search for the global optimum under the MV criterion. | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_7",
"@cite_29",
"@cite_15",
"@cite_16",
"@cite_17"
],
"mid": [
"2963856199",
"2356031020",
"1925816294",
"2105078254",
"2951640156",
"2962802563",
"2804589917"
],
"abstract": [
"In many sequential decision-making problems we may want to manage risk by minimizing some measure of variability in rewards in addition to maximizing a standard criterion. Variance related risk measures are among the most common risk-sensitive criteria in finance and operations research. However, optimizing many such criteria is known to be a hard problem. In this paper, we consider both discounted and average reward Markov decision processes. For each formulation, we first define a measure of variability for a policy, which in turn gives us a set of risk-sensitive criteria to optimize. For each of these criteria, we derive a formula for computing its gradient. We then devise actor-critic algorithms that operate on three timescales--a TD critic on the fastest timescale, a policy gradient (actor) on the intermediate timescale, and a dual ascent for Lagrange multipliers on the slowest timescale. In the discounted setting, we point out the difficulty in estimating the gradient of the variance of the return and incorporate simultaneous perturbation approaches to alleviate this. The average setting, on the other hand, allows for an actor update using compatible features to estimate the gradient of the variance. We establish the convergence of our algorithms to locally risk-sensitive optimal policies. Finally, we demonstrate the usefulness of our algorithms in a traffic signal control application.",
"In Markov decision processes (MDPs), the variance of the reward-to-go is a natural measure of uncertainty about the long term performance of a policy, and is important in domains such as finance, resource allocation, and process control. Currently however, there is no tractable procedure for calculating it in large scale MDPs. This is in contrast to the case of the expected reward-to-go, also known as the value function, for which effective simulation-based algorithms are known, and have been used successfully in various domains. In this paper we extend temporal difference (TD) learning algorithms to estimating the variance of the reward-to-go for a fixed policy. We propose variants of both TD(0) and LSTD(λ) with linear function approximation, prove their convergence, and demonstrate their utility in an option pricing problem. Our results show a dramatic improvement in terms of sample efficiency over standard Monte-Carlo methods, which are currently the state-of-the-art.",
"With the goal to generate more scalable algorithms with higher efficiency and fewer open parameters, reinforcement learning (RL) has recently moved towards combining classical techniques from optimal control and dynamic programming with modern learning techniques from statistical estimation theory. In this vein, this paper suggests to use the framework of stochastic optimal control with path integrals to derive a novel approach to RL with parameterized policies. While solidly grounded in value function estimation and optimal control based on the stochastic Hamilton-Jacobi-Bellman (HJB) equations, policy improvements can be transformed into an approximation problem of a path integral which has no open algorithmic parameters other than the exploration noise. The resulting algorithm can be conceived of as model-based, semi-model-based, or even model free, depending on how the learning problem is structured. The update equations have no danger of numerical instabilities as neither matrix inversions nor gradient learning rates are required. Our new algorithm demonstrates interesting similarities with previous RL research in the framework of probability matching and provides intuition why the slightly heuristically motivated probability matching approach can actually perform well. Empirical evaluations demonstrate significant performance improvements over gradient-based policy learning and scalability to high-dimensional control problems. Finally, a learning experiment on a simulated 12 degree-of-freedom robot dog illustrates the functionality of our algorithm in a complex robot learning scenario. We believe that Policy Improvement with Path Integrals (PI2) offers currently one of the most efficient, numerically robust, and easy to implement algorithms for RL based on trajectory roll-outs.",
"This letter proposes a new reinforcement learning (RL) paradigm that explicitly takes into account input disturbance as well as modeling errors. The use of environmental models in RL is quite popular for both offline learning using simulations and for online action planning. However, the difference between the model and the real environment can lead to unpredictable, and often unwanted, results. Based on the theory of H∞ control, we consider a differential game in which a \"disturbing\" agent tries to make the worst possible disturbance while a \"control\" agent tries to make the best control input. The problem is formulated as finding a min-max solution of a value function that takes into account the amount of the reward and the norm of the disturbance. We derive online learning algorithms for estimating the value function and for calculating the worst disturbance and the best control in reference to the value function. We tested the paradigm, which we call robust reinforcement learning (RRL), on the control task of an inverted pendulum. In the linear domain, the policy and the value function learned by online algorithms coincided with those derived analytically by the linear H∞ control theory. For a fully nonlinear swing-up task, RRL achieved robust performance with changes in the pendulum weight and friction, while a standard reinforcement learning algorithm could not deal with these changes. We also applied RRL to the cart-pole swing-up task, and a robust swing-up policy was acquired.",
"In many finite horizon episodic reinforcement learning (RL) settings, it is desirable to optimize for the undiscounted return - in settings like Atari, for instance, the goal is to collect the most points while staying alive in the long run. Yet, it may be difficult (or even intractable) mathematically to learn with this target. As such, temporal discounting is often applied to optimize over a shorter effective planning horizon. This comes at the risk of potentially biasing the optimization target away from the undiscounted goal. In settings where this bias is unacceptable - where the system must optimize for longer horizons at higher discounts - the target of the value function approximator may increase in variance leading to difficulties in learning. We present an extension of temporal difference (TD) learning, which we call TD( @math ), that breaks down a value function into a series of components based on the differences between value functions with smaller discount factors. The separation of a longer horizon value function into these components has useful properties in scalability and performance. We discuss these properties and show theoretic and empirical improvements over standard TD learning in certain settings.",
"We study the problem of off-policy value evaluation in reinforcement learning (RL), where one aims to estimate the value of a new policy based on data collected by a different policy. This problem is often a critical step when applying RL to real-world problems. Despite its importance, existing general methods either have uncontrolled bias or suffer high variance. In this work, we extend the doubly robust estimator for bandits to sequential decision-making problems, which gets the best of both worlds: it is guaranteed to be unbiased and can have a much lower variance than the popular importance sampling estimators. We demonstrate the estimator's accuracy in several benchmark problems, and illustrate its use as a subroutine in safe policy improvement. We also provide theoretical results on the inherent hardness of the problem, and show that our estimator can match the lower bound in certain scenarios.",
"We devise a distributional variant of gradient temporal-difference (TD) learning. Distributional reinforcement learning has been demonstrated to outperform the regular one in the recent study . In the policy evaluation setting, we design two new algorithms called distributional GTD2 and distributional TDC using the Cram e r distance on the distributional version of the Bellman error objective function, which inherits advantages of both the nonlinear gradient TD algorithms and the distributional RL approach. In the control setting, we propose the distributional Greedy-GQ using the similar derivation. We prove the asymptotic almost-sure convergence of distributional GTD2 and TDC to a local optimal solution for general smooth function approximators, which includes neural networks that have been widely used in recent study to solve the real-life RL problems. In each step, the computational complexities of above three algorithms are linear w.r.t. the number of the parameters of the function approximator, thus can be implemented efficiently for neural networks."
]
} |
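The record above contrasts value-based and policy-based RL approaches to the mean-variance criterion. For reference, the sketch below shows the classical single-period Markowitz mean-variance solution in closed form (a plain numpy calculation, not an RL method); the synthetic return data and the risk-aversion value are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fake monthly returns for 4 assets (rows = months); purely illustrative data.
returns = rng.normal(loc=[0.010, 0.008, 0.012, 0.005],
                     scale=[0.05, 0.04, 0.07, 0.02],
                     size=(120, 4))

mu = returns.mean(axis=0)               # sample mean return per asset
sigma = np.cov(returns, rowvar=False)   # sample covariance matrix
risk_aversion = 5.0                     # lambda in the MV objective

# Unconstrained mean-variance solution:
#   maximize  w' mu - (lambda/2) * w' Sigma w   =>   w* = Sigma^{-1} mu / lambda
w = np.linalg.solve(sigma, mu) / risk_aversion

print("weights           :", np.round(w, 3))
print("expected return   :", round(float(w @ mu), 5))
print("portfolio variance:", round(float(w @ sigma @ w), 6))
```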
1907.11830 | 2966222024 | 360° images are usually represented in either equirectangular projection (ERP) or multiple perspective projections. Different from the flat 2D images, the detection task is challenging for 360° images due to the distortion of ERP and the inefficiency of perspective projections. However, existing methods mostly focus on one of the above representations instead of both, leading to limited detection performance. Moreover, the lack of appropriate bounding-box annotations as well as the annotated datasets further increases the difficulties of the detection task. In this paper, we present a standard object detection framework for 360° images. Specifically, we adapt the terminologies of the traditional object detection task to the omnidirectional scenarios, and propose a novel two-stage object detector, i.e., Reprojection R-CNN by combining both ERP and perspective projection. Owing to the omnidirectional field-of-view of ERP, Reprojection R-CNN first generates coarse region proposals efficiently by a distortion-aware spherical region proposal network. Then, it leverages the distortion-free perspective projection and refines the proposed regions by a novel reprojection network. We construct two novel synthetic datasets for training and evaluation. Experiments reveal that Reprojection R-CNN outperforms the previous state-of-the-art methods on the mAP metric. In addition, the proposed detector could run at 178ms per image in the panoramic datasets, which implies its practicability in real-world applications. | Recent advances in 360 @math images resort to geometric information on the sphere. @cite_13 represent the ERP with a weighted graph, and apply the graph convolutional network to generate graph-based representations. @cite_9 propose using the SO(3) 3D rotation group for retrieval and classification tasks on spherical images. On top of that, @cite_30 suggest transforming the domain space from Euclidean S2 space to an SO(3) representation to reduce the distortion, and encoding rotation equivariance in the network. Meanwhile, some works attempt to solve the distortion in the ERP directly. @cite_15 transfer knowledge from a pre-trained CNN on perspective projections to a novel network on ERP. Other approaches @cite_32 @cite_6 @cite_28 refer to the idea of the deformable convolutional network @cite_18 , and propose the distortion-aware spherical convolution, where the convolutional filter gets distorted in the same way as the objects on the ERP. Though SphConv is simple and effective, due to the implicit interpolation, it cannot eliminate the distortion as the network grows deeper. To adjust the distortion from SphConv, we introduce a reprojection mechanism in Rep R-CNN, which significantly increases the detection accuracy. | {
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_28",
"@cite_9",
"@cite_32",
"@cite_6",
"@cite_15",
"@cite_13"
],
"mid": [
"2109255472",
"2807007689",
"2963325056",
"2902303185",
"2774989306",
"2951665903",
"2951706743",
"2500786414"
],
"abstract": [
"Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g., 224 @math 224) input image. This requirement is “artificial” and may reduce the recognition accuracy for the images or sub-images of an arbitrary size scale. In this work, we equip the networks with another pooling strategy, “spatial pyramid pooling”, to eliminate the above requirement. The new network structure, called SPP-net, can generate a fixed-length representation regardless of image size scale. Pyramid pooling is also robust to object deformations. With these advantages, SPP-net should in general improve all CNN-based image classification methods. On the ImageNet 2012 dataset, we demonstrate that SPP-net boosts the accuracy of a variety of CNN architectures despite their different designs. On the Pascal VOC 2007 and Caltech101 datasets, SPP-net achieves state-of-the-art classification results using a single full-image representation and no fine-tuning. The power of SPP-net is also significant in object detection. Using SPP-net, we compute the feature maps from the entire image only once, and then pool features in arbitrary regions (sub-images) to generate fixed-length representations for training the detectors. This method avoids repeatedly computing the convolutional features. In processing test images, our method is 24-102 @math faster than the R-CNN method, while achieving better or comparable accuracy on Pascal VOC 2007. In ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2014, our methods rank #2 in object detection and #3 in image classification among all 38 teams. This manuscript also introduces the improvement made for this competition.",
"Deep convolutional network architectures are often assumed to guarantee generalization for small image translations and deformations. In this paper we show that modern CNNs (VGG16, ResNet50, and InceptionResNetV2) can drastically change their output when an image is translated in the image plane by a few pixels, and that this failure of generalization also happens with other realistic small image transformations. Furthermore, the deeper the network the more we see these failures to generalize. We show that these failures are related to the fact that the architecture of modern CNNs ignores the classical sampling theorem so that generalization is not guaranteed. We also show that biases in the statistics of commonly used image datasets makes it unlikely that CNNs will learn to be invariant to these transformations. Taken together our results suggest that the performance of CNNs in object recognition falls far short of the generalization capabilities of humans.",
"Convolution as inner product has been the founding basis of convolutional neural networks (CNNs) and the key to end-to-end visual representation learning. Benefiting from deeper architectures, recent CNNs have demonstrated increasingly strong representation abilities. Despite such improvement, the increased depth and larger parameter space have also led to challenges in properly training a network. In light of such challenges, we propose hyperspherical convolution (SphereConv), a novel learning framework that gives angular representations on hyperspheres. We introduce SphereNet, deep hyperspherical convolution networks that are distinct from conventional inner product based convolutional networks. In particular, SphereNet adopts SphereConv as its basic convolution operator and is supervised by generalized angular softmax loss - a natural loss formulation under SphereConv. We show that SphereNet can effectively encode discriminative representation and alleviate training difficulty, leading to easier optimization, faster convergence and comparable (even better) classification accuracy over convolutional counterparts. We also provide some theoretical insights for the advantages of learning on hyperspheres. In addition, we introduce the learnable SphereConv, i.e., a natural improvement over prefixed SphereConv, and SphereNorm, i.e., hyperspherical learning as a normalization method. Experiments have verified our conclusions.",
"The superior performance of Deformable Convolutional Networks arises from its ability to adapt to the geometric variations of objects. Through an examination of its adaptive behavior, we observe that while the spatial support for its neural features conforms more closely than regular ConvNets to object structure, this support may nevertheless extend well beyond the region of interest, causing features to be influenced by irrelevant image content. To address this problem, we present a reformulation of Deformable ConvNets that improves its ability to focus on pertinent image regions, through increased modeling power and stronger training. The modeling power is enhanced through a more comprehensive integration of deformable convolution within the network, and by introducing a modulation mechanism that expands the scope of deformation modeling. To effectively harness this enriched modeling capability, we guide network training via a proposed feature mimicking scheme that helps the network to learn features that reflect the object focus and classification power of R-CNN features. With the proposed contributions, this new version of Deformable ConvNets yields significant performance gains over the original model and produces leading results on the COCO benchmark for object detection and instance segmentation.",
"Convolutional neural networks (CNNs) have demonstrated their ability object detection of very high resolution remote sensing images. However, CNNs have obvious limitations for modeling geometric variations in remote sensing targets. In this paper, we introduced a CNN structure, namely deformable ConvNet, to address geometric modeling in object recognition. By adding offsets to the convolution layers, feature mapping of CNN can be applied to unfixed locations, enhancing CNNs’ visual appearance understanding. In our work, a deformable region-based fully convolutional networks (R-FCN) was constructed by substituting the regular convolution layer with a deformable convolution layer. To efficiently use this deformable convolutional neural network (ConvNet), a training mechanism is developed in our work. We first set the pre-trained R-FCN natural image model as the default network parameters in deformable R-FCN. Then, this deformable ConvNet was fine-tuned on very high resolution (VHR) remote sensing images. To remedy the increase in lines like false region proposals, we developed aspect ratio constrained non maximum suppression (arcNMS). The precision of deformable ConvNet for detecting objects was then improved. An end-to-end approach was then developed by combining deformable R-FCN, a smart fine-tuning strategy and aspect ratio constrained NMS. The developed method was better than a state-of-the-art benchmark in object detection without data augmentation.",
"Convolutional Neural Network (CNN) features have been successfully employed in recent works as an image descriptor for various vision tasks. But the inability of the deep CNN features to exhibit invariance to geometric transformations and object compositions poses a great challenge for image search. In this work, we demonstrate the effectiveness of the objectness prior over the deep CNN features of image regions for obtaining an invariant image representation. The proposed approach represents the image as a vector of pooled CNN features describing the underlying objects. This representation provides robustness to spatial layout of the objects in the scene and achieves invariance to general geometric transformations, such as translation, rotation and scaling. The proposed approach also leads to a compact representation of the scene, making each image occupy a smaller memory footprint. Experiments show that the proposed representation achieves state of the art retrieval results on a set of challenging benchmark image datasets, while maintaining a compact representation.",
"With deep learning becoming the dominant approach in computer vision, the use of representations extracted from Convolutional Neural Nets (CNNs) is quickly gaining ground on Fisher Vectors (FVs) as favoured state-of-the-art global image descriptors for image instance retrieval. While the good performance of CNNs for image classification are unambiguously recognised, which of the two has the upper hand in the image retrieval context is not entirely clear yet. In this work, we propose a comprehensive study that systematically evaluates FVs and CNNs for image retrieval. The first part compares the performances of FVs and CNNs on multiple publicly available data sets. We investigate a number of details specific to each method. For FVs, we compare sparse descriptors based on interest point detectors with dense single-scale and multi-scale variants. For CNNs, we focus on understanding the impact of depth, architecture and training data on retrieval results. Our study shows that no descriptor is systematically better than the other and that performance gains can usually be obtained by using both types together. The second part of the study focuses on the impact of geometrical transformations such as rotations and scale changes. FVs based on interest point detectors are intrinsically resilient to such transformations while CNNs do not have a built-in mechanism to ensure such invariance. We show that performance of CNNs can quickly degrade in presence of rotations while they are far less affected by changes in scale. We then propose a number of ways to incorporate the required invariances in the CNN pipeline. Overall, our work is intended as a reference guide offering practically useful and simply implementable guidelines to anyone looking for state-of-the-art global descriptors best suited to their specific image instance retrieval problem.",
"Despite the great success of convolutional neural networks (CNN) for the image classification task on datasets like Cifar and ImageNet, CNN's representation power is still somewhat limited in dealing with object images that have large variation in size and clutter, where Fisher Vector (FV) has shown to be an effective encoding strategy. FV encodes an image by aggregating local descriptors with a universal generative Gaussian Mixture Model (GMM). FV however has limited learning capability and its parameters are mostly fixed after constructing the codebook. To combine together the best of the two worlds, we propose in this paper a neural network structure with FV layer being part of an end-to-end trainable system that is differentiable; we name our network FisherNet that is learnable using backpropagation. Our proposed FisherNet combines convolutional neural network training and Fisher Vector encoding in a single end-to-end structure. We observe a clear advantage of FisherNet over plain CNN and standard FV in terms of both classification accuracy and computational efficiency on the challenging PASCAL VOC object classification task."
]
} |
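The record above hinges on moving between the distorted equirectangular panorama and distortion-free perspective (tangent-plane) views. The sketch below is a generic inverse gnomonic projection that maps the pixel grid of a perspective view to ERP sampling coordinates; the function name `tangent_grid_to_erp`, the field-of-view handling, and the axis conventions are assumptions, and this is not the Rep R-CNN authors' code.

```python
import numpy as np

def tangent_grid_to_erp(lon0, lat0, fov, out_size, erp_w, erp_h):
    """Inverse gnomonic projection: for a perspective (tangent-plane) view
    centred at (lon0, lat0) with the given field of view, return the ERP
    pixel coordinates to sample for each output pixel."""
    half = np.tan(fov / 2.0)
    xs = np.linspace(-half, half, out_size)
    x, y = np.meshgrid(xs, xs)                 # tangent-plane coordinates

    rho = np.sqrt(x**2 + y**2)
    c = np.arctan(rho)
    cos_c, sin_c = np.cos(c), np.sin(c)

    # Guard the centre pixel (rho == 0) where the formulas become 0/0.
    safe_rho = np.where(rho == 0, 1.0, rho)
    lat = np.arcsin(cos_c * np.sin(lat0) + y * sin_c * np.cos(lat0) / safe_rho)
    lon = lon0 + np.arctan2(x * sin_c,
                            safe_rho * np.cos(lat0) * cos_c - y * np.sin(lat0) * sin_c)
    lat = np.where(rho == 0, lat0, lat)
    lon = np.where(rho == 0, lon0, lon)

    # Longitude/latitude -> equirectangular pixel coordinates.
    u = (lon / (2 * np.pi) + 0.5) % 1.0 * erp_w
    v = (0.5 - lat / np.pi) * erp_h
    return u, v

# Example: a 90-degree view looking 30 degrees above the horizon.
u, v = tangent_grid_to_erp(lon0=0.0, lat0=np.radians(30), fov=np.radians(90),
                           out_size=4, erp_w=1024, erp_h=512)
print(np.round(u, 1))
print(np.round(v, 1))
```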
1907.11830 | 2966222024 | 360° images are usually represented in either equirectangular projection (ERP) or multiple perspective projections. Different from the flat 2D images, the detection task is challenging for 360° images due to the distortion of ERP and the inefficiency of perspective projections. However, existing methods mostly focus on one of the above representations instead of both, leading to limited detection performance. Moreover, the lack of appropriate bounding-box annotations as well as the annotated datasets further increases the difficulties of the detection task. In this paper, we present a standard object detection framework for 360° images. Specifically, we adapt the terminologies of the traditional object detection task to the omnidirectional scenarios, and propose a novel two-stage object detector, i.e., Reprojection R-CNN by combining both ERP and perspective projection. Owing to the omnidirectional field-of-view of ERP, Reprojection R-CNN first generates coarse region proposals efficiently by a distortion-aware spherical region proposal network. Then, it leverages the distortion-free perspective projection and refines the proposed regions by a novel reprojection network. We construct two novel synthetic datasets for training and evaluation. Experiments reveal that Reprojection R-CNN outperforms the previous state-of-the-art methods on the mAP metric. In addition, the proposed detector could run at 178ms per image in the panoramic datasets, which implies its practicability in real-world applications. | The promising modern object detectors are usually based on two-stage approaches. The Region-based CNN (R-CNN) approach @cite_17 attends to a set of candidate region proposals @cite_12 in the first stage, and then uses a convolutional network to regress the bounding boxes and classify the objects in the second stage. Fast R-CNN @cite_25 extends R-CNN by extracting the proposals directly on feature maps using RoI pooling. Faster R-CNN @cite_2 further replaces the slow selective search with a fast region proposal network, achieving improvements on both speed and accuracy. Numerous extensions to this framework have been proposed @cite_1 @cite_7 @cite_19 @cite_26 . Compared with two-stage approaches, the single-stage pipeline skips the object proposal stage and generates detection and classification directly, such as SSD @cite_14 @cite_23 and YOLO @cite_3 @cite_8 @cite_11 . Though these single-stage pipelines attract interest owing to their fast speed, they lack the alignment of proposals, which is important for 360 @math object detection. Hence, we adopt the two-stage method in this paper. | {
"cite_N": [
"@cite_26",
"@cite_14",
"@cite_11",
"@cite_7",
"@cite_8",
"@cite_1",
"@cite_3",
"@cite_19",
"@cite_23",
"@cite_2",
"@cite_25",
"@cite_12",
"@cite_17"
],
"mid": [
"2589615404",
"2610420510",
"2743620784",
"2769291631",
"2342242867",
"2895236117",
"2953226057",
"2797527871",
"2950568924",
"639708223",
"2237643543",
"2963873508",
"1846926097"
],
"abstract": [
"Object proposals have recently emerged as an essential cornerstone for object detection. The current state-of-the-art object detectors employ object proposals to detect objects within a modest set of candidate bounding box proposals instead of exhaustively searching across an image using the sliding window approach. However, achieving high recall and good localization with few proposals is still a challenging problem. The challenge becomes even more difficult in the context of autonomous driving, in which small objects, occlusion, shadows, and reflections usually occur. In this paper, we present a robust object proposals re-ranking algorithm that effectivity re-ranks candidates generated from a customized class-independent 3DOP (3D Object Proposals) method using a two-stream convolutional neural network (CNN). The goal is to ensure that those proposals that accurately cover the desired objects are amongst the few top-ranked candidates. The proposed algorithm, which we call DeepStereoOP, exploits not only RGB images as in the conventional CNN architecture, but also depth features including disparity map and distance to the ground. Experiments show that the proposed algorithm outperforms all existing object proposal algorithms on the challenging KITTI benchmark in terms of both recall and localization. Furthermore, the combination of DeepStereoOP and Fast R-CNN achieves one of the best detection results of all three KITTI object classes. HighlightsWe present a robust object proposals re-ranking algorithm for object detection in autonomous driving.Both RGB images and depth features are included in the proposed two-stream CNN architecture called DeepStereoOP.Initial object proposals are generated from a customized class-independent 3DOP method.Experiments show that the proposed algorithm outperforms all existing object proposals algorithms.The combination of DeepStereoOP and Fast R-CNN achieves one of the best detection results on KITTI benchmark.",
"Many modern approaches for object detection are two-staged pipelines. The first stage identifies regions of interest which are then classified in the second stage. Faster R-CNN is such an approach for object detection which combines both stages into a single pipeline. In this paper we apply Faster R-CNN to the task of company logo detection. Motivated by its weak performance on small object instances, we examine in detail both the proposal and the classification stage with respect to a wide range of object sizes. We investigate the influence of feature map resolution on the performance of those stages. Based on theoretical considerations, we introduce an improved scheme for generating anchor proposals and propose a modification to Faster R-CNN which leverages higher-resolution feature maps for small objects. We evaluate our approach on the FlickrLogos dataset improving the RPN performance from 0.52 to 0.71 (MABO) and the detection performance from 0.52 to @math (mAP).",
"The region-based Convolutional Neural Network (CNN) detectors such as Faster R-CNN or R-FCN have already shown promising results for object detection by combining the region proposal subnetwork and the classification subnetwork together. Although R-FCN has achieved higher detection speed while keeping the detection performance, the global structure information is ignored by the position-sensitive score maps. To fully explore the local and global properties, in this paper, we propose a novel fully convolutional network, named as CoupleNet, to couple the global structure with local parts for object detection. Specifically, the object proposals obtained by the Region Proposal Network (RPN) are fed into the the coupling module which consists of two branches. One branch adopts the position-sensitive RoI (PSRoI) pooling to capture the local part information of the object, while the other employs the RoI pooling to encode the global and context information. Next, we design different coupling strategies and normalization ways to make full use of the complementary advantages between the global and local branches. Extensive experiments demonstrate the effectiveness of our approach. We achieve state-of-the-art results on all three challenging datasets, i.e. a mAP of 82.7 on VOC07, 80.4 on VOC12, and 34.4 on COCO. Codes will be made publicly available.",
"In this paper, we first investigate why typical two-stage methods are not as fast as single-stage, fast detectors like YOLO and SSD. We find that Faster R-CNN and R-FCN perform an intensive computation after or before RoI warping. Faster R-CNN involves two fully connected layers for RoI recognition, while R-FCN produces a large score maps. Thus, the speed of these networks is slow due to the heavy-head design in the architecture. Even if we significantly reduce the base model, the computation cost cannot be largely decreased accordingly. We propose a new two-stage detector, Light-Head R-CNN, to address the shortcoming in current two-stage approaches. In our design, we make the head of network as light as possible, by using a thin feature map and a cheap R-CNN subnet (pooling and single fully-connected layer). Our ResNet-101 based light-head R-CNN outperforms state-of-art object detectors on COCO while keeping time efficiency. More importantly, simply replacing the backbone with a tiny network (e.g, Xception), our Light-Head R-CNN gets 30.7 mmAP at 102 FPS on COCO, significantly outperforming the single-stage, fast detectors like YOLO and SSD on both speed and accuracy. Code will be made publicly available.",
"In Convolutional Neural Network (CNN)-based object detection methods, region proposal becomes a bottleneck when objects exhibit significant scale variation, occlusion or truncation. In addition, these methods mainly focus on 2D object detection and cannot estimate detailed properties of objects. In this paper, we propose subcategory-aware CNNs for object detection. We introduce a novel region proposal network that uses subcategory information to guide the proposal generating process, and a new detection network for joint detection and subcategory classification. By using subcategories related to object pose, we achieve state of-the-art performance on both detection and pose estimation on commonly used benchmarks.",
"The Convolutional Neural Network (CNN) based region proposal generation method (i.e. region proposal network), trained using bounding box annotations, is an essential component in modern fully supervised object detectors. However, Weakly Supervised Object Detection (WSOD) has not benefited from CNN-based proposal generation due to the absence of bounding box annotations, and is relying on standard proposal generation methods such as selective search. In this paper, we propose a weakly supervised region proposal network which is trained using only image-level annotations. The weakly supervised region proposal network consists of two stages. The first stage evaluates the objectness scores of sliding window boxes by exploiting the low-level information in CNN and the second stage refines the proposals from the first stage using a region-based CNN classifier. Our proposed region proposal network is suitable for WSOD, can be plugged into a WSOD network easily, and can share its convolutional computations with the WSOD network. Experiments on the PASCAL VOC and ImageNet detection datasets show that our method achieves the state-of-the-art performance for WSOD with performance gain of about (3 ) on average.",
"Most of the recent successful methods in accurate object detection and localization used some variants of R-CNN style two stage Convolutional Neural Networks (CNN) where plausible regions were proposed in the first stage then followed by a second stage for decision refinement. Despite the simplicity of training and the efficiency in deployment, the single stage detection methods have not been as competitive when evaluated in benchmarks consider mAP for high IoU thresholds. In this paper, we proposed a novel single stage end-to-end trainable object detection network to overcome this limitation. We achieved this by introducing Recurrent Rolling Convolution (RRC) architecture over multi-scale feature maps to construct object classifiers and bounding box regressors which are \"deep in context\". We evaluated our method in the challenging KITTI dataset which measures methods under IoU threshold of 0.7. We showed that with RRC, a single reduced VGG-16 based model already significantly outperformed all the previously published results. At the time this paper was written our models ranked the first in KITTI car detection (the hard level), the first in cyclist detection and the second in pedestrian detection. These results were not reached by the previous single stage methods. The code is publicly available.",
"Recent CNN based object detectors, no matter one-stage methods like YOLO, SSD, and RetinaNe or two-stage detectors like Faster R-CNN, R-FCN and FPN are usually trying to directly finetune from ImageNet pre-trained models designed for image classification. There has been little work discussing on the backbone feature extractor specifically designed for the object detection. More importantly, there are several differences between the tasks of image classification and object detection. 1. Recent object detectors like FPN and RetinaNet usually involve extra stages against the task of image classification to handle the objects with various scales. 2. Object detection not only needs to recognize the category of the object instances but also spatially locate the position. Large downsampling factor brings large valid receptive field, which is good for image classification but compromises the object location ability. Due to the gap between the image classification and object detection, we propose DetNet in this paper, which is a novel backbone network specifically designed for object detection. Moreover, DetNet includes the extra stages against traditional backbone network for image classification, while maintains high spatial resolution in deeper layers. Without any bells and whistles, state-of-the-art results have been obtained for both object detection and instance segmentation on the MSCOCO benchmark based on our DetNet (4.8G FLOPs) backbone. The code will be released for the reproduction.",
"Existing object proposal approaches use primarily bottom-up cues to rank proposals, while we believe that objectness is in fact a high level construct. We argue for a data-driven, semantic approach for ranking object proposals. Our framework, which we call DeepBox, uses convolutional neural networks (CNNs) to rerank proposals from a bottom-up method. We use a novel four-layer CNN architecture that is as good as much larger networks on the task of evaluating objectness while being much faster. We show that DeepBox significantly improves over the bottom-up ranking, achieving the same recall with 500 proposals as achieved by bottom-up methods with 2000. This improvement generalizes to categories the CNN has never seen before and leads to a 4.5-point gain in detection mAP. Our implementation achieves this performance while running at 260 ms per image.",
"State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [1] and Fast R-CNN [2] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features—using the recently popular terminology of neural networks with ’attention’ mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model [3] , our detection system has a frame rate of 5 fps ( including all steps ) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.",
"We introduce G-CNN, an object detection technique based on CNNs which works without proposal algorithms. G-CNN starts with a multi-scale grid of fixed bounding boxes. We train a regressor to move and scale elements of the grid towards objects iteratively. G-CNN models the problem of object detection as finding a path from a fixed grid to boxes tightly surrounding the objects. G-CNN with around 180 boxes in a multi-scale grid performs comparably to Fast R-CNN which uses around 2K bounding boxes generated with a proposal technique. This strategy makes detection faster by removing the object proposal stage as well as reducing the number of boxes to be processed.",
"We introduce G-CNN, an object detection technique based on CNNs which works without proposal algorithms. G-CNN starts with a multi-scale grid of fixed bounding boxes. We train a regressor to move and scale elements of the grid towards objects iteratively. G-CNN models the problem of object detection as finding a path from a fixed grid to boxes tightly surrounding the objects. G-CNN with around 180 boxes in a multi-scale grid performs comparably to Fast R-CNN which uses around 2K bounding boxes generated with a proposal technique. This strategy makes detection faster by removing the object proposal stage as well as reducing the number of boxes to be processed.",
"Deep Convolutional Neural Networks (CNNs) have demonstrated excellent performance in image classification, but still show room for improvement in object-detection tasks with many categories, in particular for cluttered scenes and occlusion. Modern detection algorithms like Regions with CNNs (, 2014) rely on Selective Search (, 2013) to propose regions which with high probability represent objects, where in turn CNNs are deployed for classification. Selective Search represents a family of sophisticated algorithms that are engineered with multiple segmentation, appearance and saliency cues, typically coming with a significant run-time overhead. Furthermore, (, 2014) have shown that most methods suffer from low reproducibility due to unstable superpixels, even for slight image perturbations. Although CNNs are subsequently used for classification in top-performing object-detection pipelines, current proposal methods are agnostic to how these models parse objects and their rich learned representations. As a result they may propose regions which may not resemble high-level objects or totally miss some of them. To overcome these drawbacks we propose a boosting approach which directly takes advantage of hierarchical CNN features for detecting regions of interest fast. We demonstrate its performance on ImageNet 2013 detection benchmark and compare it with state-of-the-art methods."
]
} |
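The related-work passage above walks through anchor-based two-stage detectors (a region proposal network followed by a refinement stage). The snippet below sketches the two generic building blocks of such a first stage, anchor generation and IoU scoring for planar boxes; the scales, the aspect-ratio convention, and the stride are illustrative assumptions and not taken from any particular detector's code.

```python
import numpy as np

def make_anchors(feat_h, feat_w, stride, scales=(32, 64, 128), ratios=(0.5, 1.0, 2.0)):
    """Generate (x1, y1, x2, y2) anchor boxes for every feature-map cell,
    as used by RPN-style first stages."""
    anchors = []
    for i in range(feat_h):
        for j in range(feat_w):
            cx, cy = (j + 0.5) * stride, (i + 0.5) * stride
            for s in scales:
                for r in ratios:
                    w, h = s * np.sqrt(r), s / np.sqrt(r)   # here r = width/height
                    anchors.append([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2])
    return np.array(anchors)

def iou(boxes, gt):
    """IoU of each anchor against one ground-truth box (planar boxes only)."""
    x1 = np.maximum(boxes[:, 0], gt[0]); y1 = np.maximum(boxes[:, 1], gt[1])
    x2 = np.minimum(boxes[:, 2], gt[2]); y2 = np.minimum(boxes[:, 3], gt[3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    area_g = (gt[2] - gt[0]) * (gt[3] - gt[1])
    return inter / (area_a + area_g - inter)

anchors = make_anchors(feat_h=8, feat_w=8, stride=16)
scores = iou(anchors, gt=np.array([20, 20, 90, 70]))
print(anchors.shape, "anchors; best IoU =", round(float(scores.max()), 3))
```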
1907.11830 | 2966222024 | 360° images are usually represented in either equirectangular projection (ERP) or multiple perspective projections. Different from the flat 2D images, the detection task is challenging for 360° images due to the distortion of ERP and the inefficiency of perspective projections. However, existing methods mostly focus on one of the above representations instead of both, leading to limited detection performance. Moreover, the lack of appropriate bounding-box annotations as well as the annotated datasets further increases the difficulties of the detection task. In this paper, we present a standard object detection framework for 360° images. Specifically, we adapt the terminologies of the traditional object detection task to the omnidirectional scenarios, and propose a novel two-stage object detector, i.e., Reprojection R-CNN by combining both ERP and perspective projection. Owing to the omnidirectional field-of-view of ERP, Reprojection R-CNN first generates coarse region proposals efficiently by a distortion-aware spherical region proposal network. Then, it leverages the distortion-free perspective projection and refines the proposed regions by a novel reprojection network. We construct two novel synthetic datasets for training and evaluation. Experiments reveal that Reprojection R-CNN outperforms the previous state-of-the-art methods on the mAP metric. In addition, the proposed detector could run at 178ms per image in the panoramic datasets, which implies its practicability in real-world applications. | Object detection in spherical images is an emerging task in computer vision, and several efforts @cite_32 @cite_15 @cite_31 have been made to push forward this issue. @cite_15 utilize network distillation in their network. This approach applies a regular CNN to a specific tangent plane with the origin aligned to the object center to generate region proposals. They construct a synthetic dataset by projecting objects in 2D images onto a sphere. Specifically, for each image in the dataset, they select a single bounding box in the image and project it onto the 180th meridian of the sphere with different polar angles. @cite_31 exploit a perspective-projection based detector on a real-world dataset. However, they annotate the objects with rectangular regions on ERP, which should have been distorted on the sphere. Meanwhile, @cite_32 attach the rendered 3D car images to the real-world omnidirectional images and create the synthetic FlyingCars dataset. To solve the distortion in ERP, they utilize the spherical convolution, and apply it to a vanilla SSD. | {
"cite_N": [
"@cite_15",
"@cite_31",
"@cite_32"
],
"mid": [
"2963809933",
"2963721253",
"2725486421"
],
"abstract": [
"The goal of this paper is to perform 3D object detection in the context of autonomous driving. Our method aims at generating a set of high-quality 3D object proposals by exploiting stereo imagery. We formulate the problem as minimizing an energy function that encodes object size priors, placement of objects on the ground plane as well as several depth informed features that reason about free space, point cloud densities and distance to the ground. We then exploit a CNN on top of these proposals to perform object detection. In particular, we employ a convolutional neural net (CNN) that exploits context and depth information to jointly regress to 3D bounding box coordinates and object pose. Our experiments show significant performance gains over existing RGB and RGB-D object proposal methods on the challenging KITTI benchmark. When combined with the CNN, our approach outperforms all existing results in object detection and orientation estimation tasks for all three KITTI object classes. Furthermore, we experiment also with the setting where LIDAR information is available, and show that using both LIDAR and stereo leads to the best result.",
"This paper proposes a computationally efficient approach to detecting objects natively in 3D point clouds using convolutional neural networks (CNNs). In particular, this is achieved by leveraging a feature-centric voting scheme to implement novel convolutional layers which explicitly exploit the sparsity encountered in the input. To this end, we examine the trade-off between accuracy and speed for different architectures and additionally propose to use an L 1 penalty on the filter activations to further encourage sparsity in the intermediate representations. To the best of our knowledge, this is the first work to propose sparse convolutional layers and L 1 regularisation for efficient large-scale processing of 3D data. We demonstrate the efficacy of our approach on the KITTI object detection benchmark and show that VoteSDeep models with as few as three layers outperform the previous state of the art in both laser and laser-vision based approaches by margins of up to 40 while remaining highly competitive in terms of processing time.",
"In this paper, we propose a novel method called Rotational Region CNN (R2CNN) for detecting arbitrary-oriented texts in natural scene images. The framework is based on Faster R-CNN [1] architecture. First, we use the Region Proposal Network (RPN) to generate axis-aligned bounding boxes that enclose the texts with different orientations. Second, for each axis-aligned text box proposed by RPN, we extract its pooled features with different pooled sizes and the concatenated features are used to simultaneously predict the text non-text score, axis-aligned box and inclined minimum area box. At last, we use an inclined non-maximum suppression to get the detection results. Our approach achieves competitive results on text detection benchmarks: ICDAR 2015 and ICDAR 2013."
]
} |
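The record above turns on how objects are annotated on the sphere versus on the ERP. The rough sketch below approximates the ERP rectangle covered by an object of fixed angular extent and shows how its pixel width grows toward the poles; it is a back-of-the-envelope small-angle approximation under a 1/cos(latitude) widening assumption, not the annotation procedure of any of the cited datasets, and the function name `spherical_box_to_erp_rect` is made up.

```python
import numpy as np

def spherical_box_to_erp_rect(lon_c, lat_c, fov_w, fov_h, erp_w, erp_h):
    """Approximate the ERP pixel rectangle covered by an object spanning a given
    angular extent, centred at (lon_c, lat_c). The width in ERP pixels grows
    roughly with 1/cos(lat) toward the poles, which is the distortion at issue."""
    half_h = fov_h / 2.0
    # Latitude extent maps linearly to ERP rows.
    lat_top, lat_bot = lat_c + half_h, lat_c - half_h
    # Longitude extent of a fixed tangential width widens as 1/cos(lat).
    widest_lat = max(abs(lat_top), abs(lat_bot))
    half_w = (fov_w / 2.0) / max(np.cos(widest_lat), 1e-6)

    u1 = (lon_c - half_w) / (2 * np.pi) * erp_w + erp_w / 2
    u2 = (lon_c + half_w) / (2 * np.pi) * erp_w + erp_w / 2
    v1 = (0.5 - lat_top / np.pi) * erp_h
    v2 = (0.5 - lat_bot / np.pi) * erp_h
    return u1, v1, u2, v2

# The same 20x20-degree object near the equator and at 60 degrees latitude.
for lat in (0.0, np.radians(60)):
    rect = spherical_box_to_erp_rect(0.0, lat, np.radians(20), np.radians(20), 1024, 512)
    print(np.round(rect, 1))
```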
1907.11357 | 2965380104 | As a pixel-level prediction task, semantic segmentation needs large computational cost with enormous parameters to obtain high performance. Recently, due to the increasing demand for autonomous systems and robots, it is important to make a tradeoff between accuracy and inference speed. In this paper, we propose a novel Depthwise Asymmetric Bottleneck (DAB) module to address this dilemma, which efficiently adopts depth-wise asymmetric convolution and dilated convolution to build a bottleneck structure. Based on the DAB module, we design a Depth-wise Asymmetric Bottleneck Network (DABNet) especially for real-time semantic segmentation, which creates sufficient receptive field and densely utilizes the contextual information. Experiments on Cityscapes and CamVid datasets demonstrate that the proposed DABNet achieves a balance between speed and precision. Specifically, without any pretrained model and postprocessing, it achieves 70.1% Mean IoU on the Cityscapes test dataset with only 0.76 million parameters and a speed of 104 FPS on a single GTX 1080Ti card. | Real-time semantic segmentation networks require finding a trade-off between high-quality prediction and high inference speed. ENet @cite_8 is the first network designed for real-time segmentation; it trims a great number of convolution filters to reduce computation. ICNet @cite_21 proposes an image cascade network that incorporates multi-resolution branches. ERFNet @cite_20 uses residual connections and factorized convolutions to remain efficient while retaining remarkable accuracy. More recently, ESPNet @cite_29 introduces an efficient spatial pyramid (ESP), which brings great improvement in both speed and performance. BiSeNet @cite_2 proposes two paths to combine spatial information and context information. These networks successfully made a trade-off between speed and performance, but there is still considerable room for further improvement. | {
"cite_N": [
"@cite_8",
"@cite_29",
"@cite_21",
"@cite_2",
"@cite_20"
],
"mid": [
"2611259176",
"2964217532",
"2790933182",
"2563705555",
"2951402970"
],
"abstract": [
"We focus on the challenging task of real-time semantic segmentation in this paper. It finds many practical applications and yet is with fundamental difficulty of reducing a large portion of computation for pixel-wise label inference. We propose an image cascade network (ICNet) that incorporates multi-resolution branches under proper label guidance to address this challenge. We provide in-depth analysis of our framework and introduce the cascade feature fusion unit to quickly achieve high-quality segmentation. Our system yields real-time inference on a single GPU card with decent quality results evaluated on challenging datasets like Cityscapes, CamVid and COCO-Stuff.",
"We focus on the challenging task of real-time semantic segmentation in this paper. It finds many practical applications and yet is with fundamental difficulty of reducing a large portion of computation for pixel-wise label inference. We propose an image cascade network (ICNet) that incorporates multi-resolution branches under proper label guidance to address this challenge. We provide in-depth analysis of our framework and introduce the cascade feature fusion unit to quickly achieve high-quality segmentation. Our system yields real-time inference on a single GPU card with decent quality results evaluated on challenging datasets like Cityscapes, CamVid and COCO-Stuff.",
"We introduce a fast and efficient convolutional neural network, ESPNet, for semantic segmentation of high resolution images under resource constraints. ESPNet is based on a new convolutional module, efficient spatial pyramid (ESP), which is efficient in terms of computation, memory, and power. ESPNet is 22 times faster (on a standard GPU) and 180 times smaller than the state-of-the-art semantic segmentation network PSPNet, while its category-wise accuracy is only 8 less. We evaluated EPSNet on a variety of semantic segmentation datasets including Cityscapes, PASCAL VOC, and a breast biopsy whole slide image dataset. Under the same constraints on memory and computation, ESPNet outperforms all the current efficient CNN networks such as MobileNet, ShuffleNet, and ENet on both standard metrics and our newly introduced performance metrics that measure efficiency on edge devices. Our network can process high resolution images at a rate of 112 and 9 frames per second on a standard GPU and edge device, respectively.",
"Recently, very deep convolutional neural networks (CNNs) have shown outstanding performance in object recognition and have also been the first choice for dense classification problems such as semantic segmentation. However, repeated subsampling operations like pooling or convolution striding in deep CNNs lead to a significant decrease in the initial image resolution. Here, we present RefineNet, a generic multi-path refinement network that explicitly exploits all the information available along the down-sampling process to enable high-resolution prediction using long-range residual connections. In this way, the deeper layers that capture high-level semantic features can be directly refined using fine-grained features from earlier convolutions. The individual components of RefineNet employ residual connections following the identity mapping mindset, which allows for effective end-to-end training. Further, we introduce chained residual pooling, which captures rich background context in an efficient manner. We carry out comprehensive experiments and set new state-of-the-art results on seven public datasets. In particular, we achieve an intersection-over-union score of 83.4 on the challenging PASCAL VOC 2012 dataset, which is the best reported result to date.",
"Recently, very deep convolutional neural networks (CNNs) have shown outstanding performance in object recognition and have also been the first choice for dense classification problems such as semantic segmentation. However, repeated subsampling operations like pooling or convolution striding in deep CNNs lead to a significant decrease in the initial image resolution. Here, we present RefineNet, a generic multi-path refinement network that explicitly exploits all the information available along the down-sampling process to enable high-resolution prediction using long-range residual connections. In this way, the deeper layers that capture high-level semantic features can be directly refined using fine-grained features from earlier convolutions. The individual components of RefineNet employ residual connections following the identity mapping mindset, which allows for effective end-to-end training. Further, we introduce chained residual pooling, which captures rich background context in an efficient manner. We carry out comprehensive experiments and set new state-of-the-art results on seven public datasets. In particular, we achieve an intersection-over-union score of 83.4 on the challenging PASCAL VOC 2012 dataset, which is the best reported result to date."
]
} |
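The record above (and the DABNet abstract it accompanies) describes combining depth-wise asymmetric (3x1 / 1x3) convolutions with dilated convolutions inside a residual bottleneck. The PyTorch block below is a rough sketch of that kind of factorization; the branch layout, the channel split, and the normalization and activation choices are assumptions and are not taken from the official DABNet implementation.

```python
import torch
import torch.nn as nn

class DepthwiseAsymmetricBottleneck(nn.Module):
    """Rough sketch of a bottleneck mixing depth-wise asymmetric (3x1 / 1x3)
    convolutions with a dilated pair, plus a residual connection."""
    def __init__(self, channels, dilation=2):
        super().__init__()
        mid = channels // 2
        self.reduce = nn.Sequential(
            nn.Conv2d(channels, mid, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(mid), nn.PReLU(mid))
        # Depth-wise asymmetric pair (local context).
        self.local = nn.Sequential(
            nn.Conv2d(mid, mid, (3, 1), padding=(1, 0), groups=mid, bias=False),
            nn.Conv2d(mid, mid, (1, 3), padding=(0, 1), groups=mid, bias=False),
            nn.BatchNorm2d(mid), nn.PReLU(mid))
        # Depth-wise asymmetric pair with dilation (larger receptive field).
        self.context = nn.Sequential(
            nn.Conv2d(mid, mid, (3, 1), padding=(dilation, 0),
                      dilation=(dilation, 1), groups=mid, bias=False),
            nn.Conv2d(mid, mid, (1, 3), padding=(0, dilation),
                      dilation=(1, dilation), groups=mid, bias=False),
            nn.BatchNorm2d(mid), nn.PReLU(mid))
        self.expand = nn.Conv2d(mid, channels, kernel_size=1, bias=False)

    def forward(self, x):
        y = self.reduce(x)
        y = self.local(y) + self.context(y)   # fuse local and dilated branches
        return x + self.expand(y)             # residual connection

block = DepthwiseAsymmetricBottleneck(channels=64, dilation=4)
out = block(torch.randn(1, 64, 128, 256))
print(out.shape)   # torch.Size([1, 64, 128, 256])
```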
1907.11357 | 2965380104 | As a pixel-level prediction task, semantic segmentation needs large computational cost with enormous parameters to obtain high performance. Recently, due to the increasing demand for autonomous systems and robots, it is important to make a tradeoff between accuracy and inference speed. In this paper, we propose a novel Depthwise Asymmetric Bottleneck (DAB) module to address this dilemma, which efficiently adopts depth-wise asymmetric convolution and dilated convolution to build a bottleneck structure. Based on the DAB module, we design a Depth-wise Asymmetric Bottleneck Network (DABNet) especially for real-time semantic segmentation, which creates sufficient receptive field and densely utilizes the contextual information. Experiments on Cityscapes and CamVid datasets demonstrate that the proposed DABNet achieves a balance between speed and precision. Specifically, without any pretrained model and postprocessing, it achieves 70.1% Mean IoU on the Cityscapes test dataset with only 0.76 million parameters and a speed of 104 FPS on a single GTX 1080Ti card. | Dilated convolution @cite_23 inserts zeros between the kernel elements of a standard convolution, which leads to a large effective receptive field without increasing the number of parameters; hence it is generally used in semantic segmentation models. In the DeepLab series @cite_40 @cite_35 @cite_3 , an atrous spatial pyramid pooling (ASPP) module is introduced which employs multiple parallel filters with different dilation rates to collect multi-scale information. DenseASPP @cite_39 concatenates a set of dilated convolution layers to generate dense multi-scale feature representations. Most of the state-of-the-art networks in semantic segmentation exploit dilated convolution, which proves its effectiveness in pixel-level prediction tasks. | {
"cite_N": [
"@cite_35",
"@cite_3",
"@cite_39",
"@cite_40",
"@cite_23"
],
"mid": [
"2412782625",
"2952865063",
"2895340898",
"2950510876",
"2799213142"
],
"abstract": [
"In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First , we highlight convolution with upsampled filters, or ‘atrous convolution’, as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second , we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third , we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed “DeepLab” system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.",
"In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed \"DeepLab\" system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.",
"This paper proposes a fast video salient object detection model, based on a novel recurrent network architecture, named Pyramid Dilated Bidirectional ConvLSTM (PDB-ConvLSTM). A Pyramid Dilated Convolution (PDC) module is first designed for simultaneously extracting spatial features at multiple scales. These spatial features are then concatenated and fed into an extended Deeper Bidirectional ConvLSTM (DB-ConvLSTM) to learn spatiotemporal information. Forward and backward ConvLSTM units are placed in two layers and connected in a cascaded way, encouraging information flow between the bi-directional streams and leading to deeper feature extraction. We further augment DB-ConvLSTM with a PDC-like structure, by adopting several dilated DB-ConvLSTMs to extract multi-scale spatiotemporal information. Extensive experimental results show that our method outperforms previous video saliency models in a large margin, with a real-time speed of 20 fps on a single GPU. With unsupervised video object segmentation as an example application, the proposed model (with a CRF-based post-process) achieves state-of-the-art results on two popular benchmarks, well demonstrating its superior performance and high applicability.",
"Recent advances in deep learning, especially deep convolutional neural networks (CNNs), have led to significant improvement over previous semantic segmentation systems. Here we show how to improve pixel-wise semantic segmentation by manipulating convolution-related operations that are of both theoretical and practical value. First, we design dense upsampling convolution (DUC) to generate pixel-level prediction, which is able to capture and decode more detailed information that is generally missing in bilinear upsampling. Second, we propose a hybrid dilated convolution (HDC) framework in the encoding phase. This framework 1) effectively enlarges the receptive fields (RF) of the network to aggregate global information; 2) alleviates what we call the \"gridding issue\" caused by the standard dilated convolution operation. We evaluate our approaches thoroughly on the Cityscapes dataset, and achieve a state-of-art result of 80.1 mIOU in the test set at the time of submission. We also have achieved state-of-the-art overall on the KITTI road estimation benchmark and the PASCAL VOC2012 segmentation task. Our source code can be found at this https URL .",
"Semantic image segmentation is a basic street scene understanding task in autonomous driving, where each pixel in a high resolution image is categorized into a set of semantic labels. Unlike other scenarios, objects in autonomous driving scene exhibit very large scale changes, which poses great challenges for high-level feature representation in a sense that multi-scale information must be correctly encoded. To remedy this problem, atrous convolution[14]was introduced to generate features with larger receptive fields without sacrificing spatial resolution. Built upon atrous convolution, Atrous Spatial Pyramid Pooling (ASPP)[2] was proposed to concatenate multiple atrous-convolved features using different dilation rates into a final feature representation. Although ASPP is able to generate multi-scale features, we argue the feature resolution in the scale-axis is not dense enough for the autonomous driving scenario. To this end, we propose Densely connected Atrous Spatial Pyramid Pooling (DenseASPP), which connects a set of atrous convolutional layers in a dense way, such that it generates multi-scale features that not only cover a larger scale range, but also cover that scale range densely, without significantly increasing the model size. We evaluate DenseASPP on the street scene benchmark Cityscapes[4] and achieve state-of-the-art performance."
]
} |
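To make the dilated-convolution and ASPP ideas summarized in the related-work passage above concrete, the following minimal PyTorch sketch shows 3x3 convolutions whose dilation rate enlarges the receptive field at no extra parameter cost, combined in an ASPP-like block that runs several rates in parallel and fuses them. The module name, channel sizes, and dilation rates are illustrative assumptions, not the exact configurations used in the cited papers.

import torch
import torch.nn as nn

class SimpleASPP(nn.Module):
    # ASPP-like block (hypothetical configuration): parallel 3x3 convolutions with
    # different dilation rates, concatenated and fused by a 1x1 convolution.
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        # padding == dilation keeps the spatial size unchanged for 3x3 kernels
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates
        ])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        feats = [branch(x) for branch in self.branches]  # multi-scale context
        return self.project(torch.cat(feats, dim=1))     # fuse the parallel branches

x = torch.randn(1, 64, 32, 32)
print(SimpleASPP(64, 32)(x).shape)  # torch.Size([1, 32, 32, 32])

Each branch has the same number of weights regardless of its dilation rate, which is exactly why dilation is attractive for enlarging the receptive field in real-time segmentation networks.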
1907.11357 | 2965380104 | As a pixel-level prediction task, semantic segmentation requires a large computational cost and an enormous number of parameters to obtain high performance. Recently, due to the increasing demand for autonomous systems and robots, it is important to strike a good tradeoff between accuracy and inference speed. In this paper, we propose a novel Depthwise Asymmetric Bottleneck (DAB) module to address this dilemma, which efficiently adopts depth-wise asymmetric convolution and dilated convolution to build a bottleneck structure. Based on the DAB module, we design a Depth-wise Asymmetric Bottleneck Network (DABNet) especially for real-time semantic segmentation, which creates a sufficient receptive field and densely utilizes the contextual information. Experiments on Cityscapes and CamVid datasets demonstrate that the proposed DABNet achieves a balance between speed and precision. Specifically, without any pretrained model and postprocessing, it achieves 70.1% Mean IoU on the Cityscapes test dataset with only 0.76 million parameters and a speed of 104 FPS on a single GTX 1080Ti card. | Convolution factorization divides a standard convolution operation into several steps to reduce the computational cost and memory, an approach that is extensively adopted in lightweight CNN models. The Inception models @cite_11 @cite_25 @cite_1 employ several small-sized convolutions to replace a convolution with a large kernel size while maintaining the size of the receptive field. Xception @cite_36 and MobileNet @cite_0 use depth-wise separable convolution to reduce the amount of computation with only a slight drop in performance. MobileNetV2 @cite_15 proposes an inverted residual block and linear bottlenecks to further improve the performance. ShuffleNet @cite_17 applies point-wise group convolution with a channel shuffle operation to enable information communication between different groups of channels. | {
"cite_N": [
"@cite_36",
"@cite_1",
"@cite_17",
"@cite_0",
"@cite_15",
"@cite_25",
"@cite_11"
],
"mid": [
"2788715907",
"2885059312",
"2963705792",
"2963048316",
"2515385951",
"2962965870",
"1570197553"
],
"abstract": [
"In recent years considerable research efforts have been devoted to compression techniques of convolutional neural networks (CNNs). Many works so far have focused on CNN connection pruning methods which produce sparse parameter tensors in convolutional or fully-connected layers. It has been demonstrated in several studies that even simple methods can effectively eliminate connections of a CNN. However, since these methods make parameter tensors just sparser but no smaller, the compression may not transfer directly to acceleration without support from specially designed hardware. In this paper, we propose an iterative approach named Auto-balanced Filter Pruning, where we pre-train the network in an innovative auto-balanced way to transfer the representational capacity of its convolutional layers to a fraction of the filters, prune the redundant ones, then re-train it to restore the accuracy. In this way, a smaller version of the original network is learned and the floating-point operations (FLOPs) are reduced. By applying this method on several common CNNs, we show that a large portion of the filters can be discarded without obvious accuracy drop, leading to significant reduction of computational burdens. Concretely, we reduce the inference cost of LeNet-5 on MNIST, VGG-16 and ResNet-56 on CIFAR-10 by 95.1 , 79.7 and 60.9 , respectively. Copyright © 2018, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.",
"We show that the output of a (residual) convolutional neural network (CNN) with an appropriate prior over the weights and biases is a Gaussian process (GP) in the limit of infinitely many convolutional filters, extending similar results for dense networks. For a CNN, the equivalent kernel can be computed exactly and, unlike \"deep kernels\", has very few parameters: only the hyperparameters of the original CNN. Further, we show that this kernel has two properties that allow it to be computed efficiently; the cost of evaluating the kernel for a pair of images is similar to a single forward pass through the original CNN with only one filter per layer. The kernel equivalent to a 32-layer ResNet obtains 0.84 classification error on MNIST, a new record for GPs with a comparable number of parameters.",
"Phenomenally successful in practical inference problems, convolutional neural networks (CNN) are widely deployed in mobile devices, data centers, and even supercomputers. The number of parameters needed in CNNs, however, are often large and undesirable. Consequently, various methods have been developed to prune a CNN once it is trained. Nevertheless, the resulting CNNs offer limited benefits. While pruning the fully connected layers reduces a CNN's size considerably, it does not improve inference speed noticeably as the compute heavy parts lie in convolutions. Pruning CNNs in a way that increase inference speed often imposes specific sparsity structures, thus limiting the achievable sparsity levels. @PARASPLIT We present a method to realize simultaneously size economy and speed improvement while pruning CNNs. Paramount to our success is an efficient general sparse-with-dense matrix multiplication implementation that is applicable to convolution of feature maps with kernels of arbitrary sparsity patterns. Complementing this, we developed a performance model that predicts sweet spots of sparsity levels for different layers and on different computer architectures. Together, these two allow us to demonstrate 3.1-7.3x convolution speedups over dense convolution in AlexNet, on Intel Atom, Xeon, and Xeon Phi processors, spanning the spectrum from mobile devices to supercomputers.",
"Abstract: We propose a simple two-step approach for speeding up convolution layers within large convolutional neural networks based on tensor decomposition and discriminative fine-tuning. Given a layer, we use non-linear least squares to compute a low-rank CP-decomposition of the 4D convolution kernel tensor into a sum of a small number of rank-one tensors. At the second step, this decomposition is used to replace the original convolutional layer with a sequence of four convolutional layers with small kernels. After such replacement, the entire network is fine-tuned on the training data using standard backpropagation process. We evaluate this approach on two CNNs and show that it is competitive with previous approaches, leading to higher obtained CPU speedups at the cost of lower accuracy drops for the smaller of the two networks. Thus, for the 36-class character classification CNN, our approach obtains a 8.5x CPU speedup of the whole network with only minor accuracy drop (1 from 91 to 90 ). For the standard ImageNet architecture (AlexNet), the approach speeds up the second convolution layer by a factor of 4x at the cost of @math increase of the overall top-5 classification error.",
"The success of CNNs in various applications is accompanied by a significant increase in the computation and parameter storage costs. Recent efforts toward reducing these overheads involve pruning and compressing the weights of various layers without hurting original accuracy. However, magnitude-based pruning of weights reduces a significant number of parameters from the fully connected layers and may not adequately reduce the computation costs in the convolutional layers due to irregular sparsity in the pruned networks. We present an acceleration method for CNNs, where we prune filters from CNNs that are identified as having a small effect on the output accuracy. By removing whole filters in the network together with their connecting feature maps, the computation costs are reduced significantly. In contrast to pruning weights, this approach does not result in sparse connectivity patterns. Hence, it does not need the support of sparse convolution libraries and can work with existing efficient BLAS libraries for dense matrix multiplications. We show that even simple filter pruning techniques can reduce inference costs for VGG-16 by up to 34 and ResNet-110 by up to 38 on CIFAR10 while regaining close to the original accuracy by retraining the networks.",
"The success of CNNs in various applications is accompanied by a significant increase in the computation and parameter storage costs. Recent efforts toward reducing these overheads involve pruning and compressing the weights of various layers without hurting original accuracy. However, magnitude-based pruning of weights reduces a significant number of parameters from the fully connected layers and may not adequately reduce the computation costs in the convolutional layers due to irregular sparsity in the pruned networks. We present an acceleration method for CNNs, where we prune filters from CNNs that are identified as having a small effect on the output accuracy. By removing whole filters in the network together with their connecting feature maps, the computation costs are reduced significantly. In contrast to pruning weights, this approach does not result in sparse connectivity patterns. Hence, it does not need the support of sparse convolution libraries and can work with existing efficient BLAS libraries for dense matrix multiplications. We show that even simple filter pruning techniques can reduce inference costs for VGG-16 by up to 34 and ResNet-110 by up to 38 on CIFAR10 while regaining close to the original accuracy by retraining the networks.",
"In this work, we investigate the use of sparsity-inducing regularizers during training of Convolution Neural Networks (CNNs). These regularizers encourage that fewer connections in the convolution and fully connected layers take non-zero values and in effect result in sparse connectivity between hidden units in the deep network. This in turn reduces the memory and runtime cost involved in deploying the learned CNNs. We show that training with such regularization can still be performed using stochastic gradient descent implying that it can be used easily in existing codebases. Experimental evaluation of our approach on MNIST, CIFAR, and ImageNet datasets shows that our regularizers can result in dramatic reductions in memory requirements. For instance, when applied on AlexNet, our method can reduce the memory consumption by a factor of four with minimal loss in accuracy."
]
} |
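The convolution-factorization idea described in the related-work passage above can be sketched with a generic depth-wise separable block in PyTorch; this is an illustrative decomposition, not the exact building block of Xception, MobileNet, or ShuffleNet. A per-channel (depth-wise) k x k convolution followed by a 1x1 (point-wise) convolution approximates a standard convolution with roughly k*k times fewer parameters.

import torch.nn as nn

def depthwise_separable(in_ch, out_ch, k=3):
    # Depth-wise: one k x k filter per input channel (groups == in_ch).
    # Point-wise: 1x1 convolution that mixes information across channels.
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, k, padding=k // 2, groups=in_ch, bias=False),
        nn.BatchNorm2d(in_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(in_ch, out_ch, 1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

standard = nn.Conv2d(128, 128, 3, padding=1, bias=False)
separable = depthwise_separable(128, 128)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(standard), count(separable))  # roughly 147k vs 18k parameters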
1907.11484 | 2966798122 | In recent years, object detection has shown impressive results using supervised deep learning, but it remains challenging in a cross-domain environment. The variations of illumination, style, scale, and appearance in different domains can seriously affect the performance of detection models. Previous works use adversarial training to align global features across the domain shift and to achieve image information transfer. However, such methods do not effectively match the distribution of local features, resulting in limited improvement in cross-domain object detection. To solve this problem, we propose a multi-level domain adaptive model to simultaneously align the distributions of local-level features and global-level features. We evaluate our method with multiple experiments, including adverse weather adaptation, synthetic data adaptation, and cross camera adaptation. In most object categories, the proposed method achieves superior performance against state-of-the-art techniques, which demonstrates the effectiveness and robustness of our method. | Domain adaptation is a technique that adapts a model trained in one domain to another. Many related works try to define and minimize the distance between the feature distributions of data from different domains @cite_6 @cite_3 @cite_0 @cite_23 @cite_14 @cite_11 . For example, the deep domain confusion (DDC) model @cite_14 explores invariant representations between different domains by minimizing the maximum mean discrepancy (MMD) of feature distributions. Long et al. propose to adapt all task-specific layers and explore multiple kernel variants of MMD @cite_3 . Ganin and Lempitsky use adversarial learning to achieve domain adaptation, learning the distance with a discriminator @cite_6 . Saito et al. propose to maximize the discrepancy between two classifiers’ outputs to align distributions @cite_10 . Most of the works mentioned above are designed for classification or segmentation. | {
"cite_N": [
"@cite_14",
"@cite_3",
"@cite_6",
"@cite_0",
"@cite_23",
"@cite_10",
"@cite_11"
],
"mid": [
"2962970380",
"2607350342",
"1594039573",
"2811444512",
"2767179670",
"2611292810",
"2964285681"
],
"abstract": [
"Domain adaptation refers to the problem of leveraging labeled data in a source domain to learn an accurate model in a target domain where labels are scarce or unavailable. A recent approach for finding a common representation of the two domains is via domain adversarial training (Ganin & Lempitsky, 2015), which attempts to induce a feature extractor that matches the source and target feature distributions in some feature space. However, domain adversarial training faces two critical limitations: 1) if the feature extraction function has high-capacity, then feature distribution matching is a weak constraint, 2) in non-conservative domain adaptation (where no single classifier can perform well in both the source and target domains), training the model to do well on the source domain hurts performance on the target domain. In this paper, we address these issues through the lens of the cluster assumption, i.e., decision boundaries should not cross high-density data regions. We propose two novel and related models: 1) the Virtual Adversarial Domain Adaptation (VADA) model, which combines domain adversarial training with a penalty term that punishes the violation the cluster assumption; 2) the Decision-boundary Iterative Refinement Training with a Teacher (DIRT-T) model, which takes the VADA model as initialization and employs natural gradient steps to further minimize the cluster assumption violation. Extensive empirical results demonstrate that the combination of these two models significantly improve the state-of-the-art performance on several visual domain adaptation benchmarks.",
"Domain adaptation is transfer learning which aims to generalize a learning model across training and testing data with different distributions. Most previous research tackle this problem in seeking a shared feature representation between source and target domains while reducing the mismatch of their data distributions. In this paper, we propose a close yet discriminative domain adaptation method, namely CDDA, which generates a latent feature representation with two interesting properties. First, the discrepancy between the source and target domain, measured in terms of both marginal and conditional probability distribution via Maximum Mean Discrepancy is minimized so as to attract two domains close to each other. More importantly, we also design a repulsive force term, which maximizes the distances between each label dependent sub-domain to all others so as to drag different class dependent sub-domains far away from each other and thereby increase the discriminative power of the adapted domain. Moreover, given the fact that the underlying data manifold could have complex geometric structure, we further propose the constraints of label smoothness and geometric structure consistency for label propagation. Extensive experiments are conducted on 36 cross-domain image classification tasks over four public datasets. The comprehensive results show that the proposed method consistently outperforms the state-of-the-art methods with significant margins.",
"Domain adaptation is one of the most challenging tasks of modern data analytics. If the adaptation is done correctly, models built on a specific data representation become more robust when confronted to data depicting the same classes, but described by another observation system. Among the many strategies proposed, finding domain-invariant representations has shown excellent properties, in particular since it allows to train a unique classifier effective in all domains. In this paper, we propose a regularized unsupervised optimal transportation model to perform the alignment of the representations in the source and target domains. We learn a transportation plan matching both PDFs, which constrains labeled samples of the same class in the source domain to remain close during transport. This way, we exploit at the same time the labeled samples in the source and the distributions observed in both domains. Experiments on toy and challenging real visual adaptation examples show the interest of the method, that consistently outperforms state of the art approaches. In addition, numerical experiments show that our approach leads to better performances on domain invariant deep learning features and can be easily adapted to the semi-supervised case where few labeled samples are available in the target domain.",
"Domain adaptation is a promising technique when addressing limited or no labeled target data by borrowing well-labeled knowledge from the auxiliary source data. Recently, researchers have exploited multi-layer structures for discriminative feature learning to reduce the domain discrepancy. However, there are limited research efforts on simultaneously building a deep structure and a discriminative classifier over both labeled source and unlabeled target. In this paper, we propose a semi-supervised deep domain adaptation framework, in which the multi-layer feature extractor and a multi-class classifier are jointly learned to benefit from each other. Specifically, we develop a novel semi-supervised class-wise adaptation manner to fight off the conditional distribution mismatch between two domains by assigning a probabilistic label to each target sample, i.e., multiple class labels with different probabilities. Furthermore, a multi-class classifier is simultaneously trained on labeled source and unlabeled target samples in a semi-supervised fashion. In this way, the deep structure can formally alleviate the domain divergence and enhance the feature transferability. Experimental evaluations on several standard cross-domain benchmarks verify the superiority of our proposed approach.",
"In multimedia analysis, the task of domain adaptation is to adapt the feature representation learned in the source domain with rich label information to the target domain with less or even no label information. Significant research endeavors have been devoted to aligning the feature distributions between the source and the target domains in the top fully connected layers based on unsupervised DNN-based models. However, the domain adaptation has been arbitrarily constrained near the output ends of the DNN models, which thus brings about inadequate knowledge transfer in DNN-based domain adaptation process, especially near the input end. We develop an attention transfer process for convolutional domain adaptation. The domain discrepancy, measured in correlation alignment loss, is minimized on the second-order correlation statistics of the attention maps for both source and target domains. Then we propose Deep Unsupervised Convolutional Domain Adaptation DUCDA method, which jointly minimizes the supervised classification loss of labeled source data and the unsupervised correlation alignment loss measured on both convolutional layers and fully connected layers. The multi-layer domain adaptation process collaborately reinforces each individual domain adaptation component, and significantly enhances the generalization ability of the CNN models. Extensive cross-domain object classification experiments show DUCDA outperforms other state-of-the-art approaches, and validate the promising power of DUCDA towards large scale real world application.",
"In domain adaptation, maximum mean discrepancy (MMD) has been widely adopted as a discrepancy metric between the distributions of source and target domains. However, existing MMD-based domain adaptation methods generally ignore the changes of class prior distributions, i.e., class weight bias across domains. This remains an open problem but ubiquitous for domain adaptation, which can be caused by changes in sample selection criteria and application scenarios. We show that MMD cannot account for class weight bias and results in degraded domain adaptation performance. To address this issue, a weighted MMD model is proposed in this paper. Specifically, we introduce class-specific auxiliary weights into the original MMD for exploiting the class prior probability on source and target domains, whose challenge lies in the fact that the class label in target domain is unavailable. To account for it, our proposed weighted MMD model is defined by introducing an auxiliary weight for each class in the source domain, and a classification EM algorithm is suggested by alternating between assigning the pseudo-labels, estimating auxiliary weights and updating model parameters. Extensive experiments demonstrate the superiority of our weighted MMD over conventional MMD for domain adaptation.",
"In domain adaptation, maximum mean discrepancy (MMD) has been widely adopted as a discrepancy metric between the distributions of source and target domains. However, existing MMD-based domain adaptation methods generally ignore the changes of class prior distributions, i.e., class weight bias across domains. This remains an open problem but ubiquitous for domain adaptation, which can be caused by changes in sample selection criteria and application scenarios. We show that MMD cannot account for class weight bias and results in degraded domain adaptation performance. To address this issue, a weighted MMD model is proposed in this paper. Specifically, we introduce class-specific auxiliary weights into the original MMD for exploiting the class prior probability on source and target domains, whose challenge lies in the fact that the class label in target domain is unavailable. To account for it, our proposed weighted MMD model is defined by introducing an auxiliary weight for each class in the source domain, and a classification EM algorithm is suggested by alternating between assigning the pseudo-labels, estimating auxiliary weights and updating model parameters. Extensive experiments demonstrate the superiority of our weighted MMD over conventional MMD for domain adaptation."
]
} |
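As a rough illustration of the MMD criterion referred to in the related-work passage above, the sketch below computes a biased MMD estimate between a batch of source features and a batch of target features using a Gaussian kernel; the bandwidth and the way such a term would be weighted against the task loss are assumptions made for illustration, not the settings of the cited works.

import torch

def rbf_mmd(source, target, sigma=1.0):
    # Biased squared-MMD estimate between two feature batches of shape (n, d) and (m, d).
    def gram(a, b):
        dists = torch.cdist(a, b) ** 2          # pairwise squared Euclidean distances
        return torch.exp(-dists / (2 * sigma ** 2))
    k_ss = gram(source, source).mean()
    k_tt = gram(target, target).mean()
    k_st = gram(source, target).mean()
    return k_ss + k_tt - 2 * k_st               # approaches 0 when the distributions match

src = torch.randn(64, 256)        # e.g. pooled source-domain features
tgt = torch.randn(64, 256) + 0.5  # shifted target-domain features
print(rbf_mmd(src, tgt).item())

Minimizing such a term alongside the supervised loss pulls the two feature distributions together, which is the common thread behind the MMD-based methods cited above.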
1907.11484 | 2966798122 | In recent years, object detection has shown impressive results using supervised deep learning, but it remains challenging in a cross-domain environment. The variations of illumination, style, scale, and appearance in different domains can seriously affect the performance of detection models. Previous works use adversarial training to align global features across the domain shift and to achieve image information transfer. However, such methods do not effectively match the distribution of local features, resulting in limited improvement in cross-domain object detection. To solve this problem, we propose a multi-level domain adaptive model to simultaneously align the distributions of local-level features and global-level features. We evaluate our method with multiple experiments, including adverse weather adaptation, synthetic data adaptation, and cross camera adaptation. In most object categories, the proposed method achieves superior performance against state-of-the-art techniques, which demonstrates the effectiveness and robustness of our method. | Huang et al. propose that aligning the distributions of activations of intermediate layers can alleviate covariate shift @cite_20 . This idea is partly similar to our work. However, instead of using a least squares generative adversarial network (LSGAN) @cite_18 loss to align distributions for semantic segmentation, we use a multi-level image patch loss for object detection. | {
"cite_N": [
"@cite_18",
"@cite_20"
],
"mid": [
"2895168809",
"2738171447"
],
"abstract": [
"We introduce a layer-wise unsupervised domain adaptation approach for semantic segmentation. Instead of merely matching the output distributions of the source and target domains, our approach aligns the distributions of activations of intermediate layers. This scheme exhibits two key advantages. First, matching across intermediate layers introduces more constraints for training the network in the target domain, making the optimization problem better conditioned. Second, the matched activations at each layer provide similar inputs to the next layer for both training and adaptation, and thus alleviate covariate shift. We use a Generative Adversarial Network (or GAN) to align activation distributions. Experimental results show that our approach achieves state-of-the-art results on a variety of popular domain adaptation tasks, including (1) from GTA to Cityscapes for semantic segmentation, (2) from SYNTHIA to Cityscapes for semantic segmentation, and (3) adaptations on USPS and MNIST for image classification (The website of this paper is https: rsents.github.io dam.html).",
"Recent work has demonstrated the emergence of semantic object-part detectors in activation patterns of convolutional neural networks (CNNs), but did not account for the distributed multi-layer neural activations in such networks. In this work, we propose a novel method to extract distributed patterns of activations from a CNN and show that such patterns correspond to high-level visual attributes. We propose an unsupervised learning module that sits above a pre-trained CNN and learns distributed activation patterns of the network. We utilize elastic non-negative matrix factorization to analyze the responses of a pretrained CNN to an input image and extract salient image regions. The corresponding patterns of neural activations for the extracted salient regions are then clustered via unsupervised deep embedding for clustering (DEC) framework. We demonstrate that these distributed activations contain high-level image features that could be explicitly used for image classification."
]
} |
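The distinction drawn above between an LSGAN-style alignment loss and a patch-level loss can be illustrated with a toy least-squares domain discriminator that classifies every spatial position of a feature map as source or target; the channel sizes and the two-term objective below are illustrative assumptions rather than the exact architecture of either cited approach.

import torch
import torch.nn as nn

# Toy patch-level domain discriminator: one prediction per spatial location.
discriminator = nn.Sequential(
    nn.Conv2d(256, 128, 1), nn.ReLU(inplace=True),
    nn.Conv2d(128, 1, 1),
)

def lsgan_domain_loss(feat, domain_label):
    # Least-squares (LSGAN-style) objective applied per patch, then averaged.
    pred = discriminator(feat)
    target = torch.full_like(pred, float(domain_label))
    return ((pred - target) ** 2).mean()

src_feat = torch.randn(2, 256, 38, 38)  # backbone features of a source-domain image
tgt_feat = torch.randn(2, 256, 38, 38)  # backbone features of a target-domain image
d_loss = lsgan_domain_loss(src_feat.detach(), 0) + lsgan_domain_loss(tgt_feat.detach(), 1)
print(d_loss.item())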
1907.11468 | 2965936489 | Deep learning has been shown to achieve impressive results in several domains like computer vision and natural language processing. Deep architectures are typically trained following a supervised scheme and, therefore, they rely on the availability of a large amount of labeled training data to effectively learn their parameters. Neuro-symbolic approaches have recently gained popularity to inject prior knowledge into a deep learner without requiring it to induce this knowledge from data. These approaches can potentially learn competitive solutions with a significant reduction of the amount of supervised data. A large class of neuro-symbolic approaches is based on First-Order Logic to represent prior knowledge, that is relaxed to a differentiable form using fuzzy logic. This paper shows that the loss function expressing these neuro-symbolic learning tasks can be unambiguously determined given the selection of a t-norm generator. When restricted to simple supervised learning, the presented theoretical apparatus provides a clean justification to the popular cross-entropy loss, that has been shown to provide faster convergence and to reduce the vanishing gradient problem in very deep structures. One advantage of the proposed learning formulation is that it can be extended to all the knowledge that can be represented by a neuro-symbolic method, and it allows the development of a novel class of loss functions, that the experimental results show to lead to faster convergence rates than other approaches previously proposed in the literature. | Neuro-symbolic approaches @cite_30 express the internal or output structure of the learner using logic. First-Order Logic (FOL) is often selected as the declarative framework for the knowledge because of its flexibility and expressive power. This class of methodologies is rooted in previous work from the Statistical Relational Learning community, which developed frameworks for performing logic inference in the presence of uncertainty. For example, Markov Logic Networks @cite_2 and Probabilistic Soft Logic @cite_19 integrate FOL and graphical models. A common solution to integrate logic reasoning with uncertainty and deep learning relies on using deep networks to approximate the FOL predicates, and the overall architecture is optimized end-to-end by relaxing the FOL into a differentiable form, which translates into a set of constraints. This approach is followed, with minor variants, by Semantic Based Regularization @cite_28 , the Lyrics framework @cite_17 , Logic Tensor Networks @cite_14 , the Semantic Loss @cite_0 , and DeepProbLog @cite_4 , which extends the ProbLog @cite_13 @cite_15 framework with predicates approximated by jointly learned functions. | {
"cite_N": [
"@cite_30",
"@cite_14",
"@cite_4",
"@cite_28",
"@cite_0",
"@cite_19",
"@cite_2",
"@cite_15",
"@cite_13",
"@cite_17"
],
"mid": [
"2617995259",
"2556644972",
"2048919592",
"1986767090",
"2921865277",
"2963611534",
"2725395424",
"2201744460",
"2250322043",
"2621167405"
],
"abstract": [
"We introduce neural networks for end-to-end differentiable proving of queries to knowledge bases by operating on dense vector representations of symbols. These neural networks are constructed recursively by taking inspiration from the backward chaining algorithm as used in Prolog. Specifically, we replace symbolic unification with a differentiable computation on vector representations of symbols using a radial basis function kernel, thereby combining symbolic reasoning with learning subsymbolic vector representations. By using gradient descent, the resulting neural network can be trained to infer facts from a given incomplete knowledge base. It learns to (i) place representations of similar symbols in close proximity in a vector space, (ii) make use of such similarities to prove queries, (iii) induce logical rules, and (iv) use provided and induced logical rules for multi-hop reasoning. We demonstrate that this architecture outperforms ComplEx, a state-of-the-art neural link prediction model, on three out of four benchmark knowledge bases while at the same time inducing interpretable function-free first-order logic rules.",
"The effective integration of learning and reasoning is a well-known and challenging area of research within artificial intelligence. Neural-symbolic systems seek to integrate learning and reasoning by combining neural networks and symbolic knowledge representation. In this paper, a novel methodology is proposed for the extraction of relational knowledge from neural networks which are trainable by the efficient application of the backpropagation learning algorithm. First-order logic rules are extracted from the neural networks, offering interpretable symbolic relational models on which logical reasoning can be performed. The wellknown knowledge extraction algorithm TREPAN was adapted and incorporated into the first-order version of the neural-symbolic system CILP++. Empirical results obtained in comparison with a probabilistic model for relational learning, Markov Logic Networks, and a state-of-the-art Inductive Logic Programming system, Aleph, indicate that the proposed methodology achieves competitive accuracy results consistently in all datasets investigated, while either Markov Logic Networks or Aleph show considerably worse results in at least one dataset. It is expected that effective knowledge extraction from neural networks can contribute to the integration of heterogeneous knowledge representations.",
"We propose a general framework to incorporate first-order logic (FOL) clauses, that are thought of as an abstract and partial representation of the environment, into kernel machines that learn within a semi-supervised scheme. We rely on a multi-task learning scheme where each task is associated with a unary predicate defined on the feature space, while higher level abstract representations consist of FOL clauses made of those predicates. We re-use the kernel machine mathematical apparatus to solve the problem as primal optimization of a function composed of the loss on the supervised examples, the regularization term, and a penalty term deriving from forcing real-valued constraints deriving from the predicates. Unlike for classic kernel machines, however, depending on the logic clauses, the overall function to be optimized is not convex anymore. An important contribution is to show that while tackling the optimization by classic numerical schemes is likely to be hopeless, a stage-based learning scheme, in which we start learning the supervised examples until convergence is reached, and then continue by forcing the logic clauses is a viable direction to attack the problem. Some promising experimental results are given on artificial learning tasks and on the automatic tagging of bibtex entries to emphasize the comparison with plain kernel machines.",
"Relational learning can be described as the task of learning first-order logic rules from examples. It has enabled a number of new machine learning applications, e.g. graph mining and link analysis. Inductive Logic Programming (ILP) performs relational learning either directly by manipulating first-order rules or through propositionalization, which translates the relational task into an attribute-value learning task by representing subsets of relations as features. In this paper, we introduce a fast method and system for relational learning based on a novel propositionalization called Bottom Clause Propositionalization (BCP). Bottom clauses are boundaries in the hypothesis search space used by ILP systems Progol and Aleph. Bottom clauses carry semantic meaning and can be mapped directly onto numerical vectors, simplifying the feature extraction process. We have integrated BCP with a well-known neural-symbolic system, C-IL2P, to perform learning from numerical vectors. C-IL2P uses background knowledge in the form of propositional logic programs to build a neural network. The integrated system, which we call CILP++, handles first-order logic knowledge and is available for download from Sourceforge. We have evaluated CILP++ on seven ILP datasets, comparing results with Aleph and a well-known propositionalization method, RSD. The results show that CILP++ can achieve accuracy comparable to Aleph, while being generally faster, BCP achieved statistically significant improvement in accuracy in comparison with RSD when running with a neural network, but BCP and RSD perform similarly when running with C4.5. We have also extended CILP++ to include a statistical feature selection method, mRMR, with preliminary results indicating that a reduction of more than 90 of features can be achieved with a small loss of accuracy.",
"In spite of the amazing results obtained by deep learning in many applications, a real intelligent behavior of an agent acting in a complex environment is likely to require some kind of higher-level symbolic inference. Therefore, there is a clear need for the definition of a general and tight integration between low-level tasks, processing sensorial data that can be effectively elaborated using deep learning techniques, and the logic reasoning that allows humans to take decisions in complex environments. This paper presents LYRICS, a generic interface layer for AI, which is implemented in TersorFlow (TF). LYRICS provides an input language that allows to define arbitrary First Order Logic (FOL) background knowledge. The predicates and functions of the FOL knowledge can be bound to any TF computational graph, and the formulas are converted into a set of real-valued constraints, which participate to the overall optimization problem. This allows to learn the weights of the learners, under the constraints imposed by the prior knowledge. The framework is extremely general as it imposes no restrictions in terms of which models or knowledge can be integrated. In this paper, we show the generality of the approach showing some use cases of the presented language, including generative models, logic reasoning, model checking and supervised learning.",
"Despite the availability of a huge amount of video data accompanied by descriptive texts, it is not always easy to exploit the information contained in natural language in order to automatically recognize video concepts. Towards this goal, in this paper we use textual cues as means of supervision, introducing two weakly supervised techniques that extend the Multiple Instance Learning (MIL) framework: the Fuzzy Sets Multiple Instance Learning (FSMIL) and the Probabilistic Labels Multiple Instance Learning (PLMIL). The former encodes the spatio-temporal imprecision of the linguistic descriptions with Fuzzy Sets, while the latter models different interpretations of each description's semantics with Probabilistic Labels, both formulated through a convex optimization algorithm. In addition, we provide a novel technique to extract weak labels in the presence of complex semantics, that consists of semantic similarity computations. We evaluate our methods on two distinct problems, namely face and action recognition, in the challenging and realistic setting of movies accompanied by their screenplays, contained in the COGNIMUSE database. We show that, on both tasks, our method considerably outperforms a state-of-the-art weakly supervised approach, as well as other baselines.",
"We study the problem of learning probabilistic first-order logical rules for knowledge base reasoning. This learning problem is difficult because it requires learning the parameters in a continuous space as well as the structure in a discrete space. We propose a framework, Neural Logic Programming, that combines the parameter and structure learning of first-order logical rules in an end-to-end differentiable model. This approach is inspired by a recently-developed differentiable logic called TensorLog, where inference tasks can be compiled into sequences of differentiable operations. We design a neural controller system that learns to compose these operations. Empirically, our method outperforms prior work on multiple knowledge base benchmark datasets, including Freebase and WikiMovies.",
"Abstract This paper proposes a unified approach to learning from constraints, which integrates the ability of classical machine learning techniques to learn from continuous feature-based representations with the ability of reasoning using higher-level semantic knowledge typical of Statistical Relational Learning. Learning tasks are modeled in the general framework of multi-objective optimization, where a set of constraints must be satisfied in addition to the traditional smoothness regularization term. The constraints translate First Order Logic formulas, which can express learning-from-example supervisions and general prior knowledge about the environment by using fuzzy logic. By enforcing the constraints also on the test set, this paper presents a natural extension of the framework to perform collective classification. Interestingly, the theory holds for both the case of data represented by feature vectors and the case of data simply expressed by pattern identifiers, thus extending classic kernel machines and graph regularization, respectively. This paper also proposes a probabilistic interpretation of the proposed learning scheme, and highlights intriguing connections with probabilistic approaches like Markov Logic Networks. Experimental results on classic benchmarks provide clear evidence of the remarkable improvements that are obtained with respect to related approaches.",
"Elementary-level science exams pose significant knowledge acquisition and reasoning challenges for automatic question answering. We develop a system that reasons with knowledge derived from textbooks, represented in a subset of firstorder logic. Automatic extraction, while scalable, often results in knowledge that is incomplete and noisy, motivating use of reasoning mechanisms that handle uncertainty. Markov Logic Networks (MLNs) seem a natural model for expressing such knowledge, but the exact way of leveraging MLNs is by no means obvious. We investigate three ways of applying MLNs to our task. First, we simply use the extracted science rules directly as MLN clauses and exploit the structure present in hard constraints to improve tractability. Second, we interpret science rules as describing prototypical entities, resulting in a drastically simplified but brittle network. Our third approach, called Praline, uses MLNs to align lexical elements as well as define and control how inference should be performed in this task. Praline demonstrates a 15 accuracy boost and a 10x reduction in runtime as compared to other MLNbased methods, and comparable accuracy to word-based baseline approaches.",
"This paper presents a revision of Real Logic and its implementation with Logic Tensor Networks and its application to Semantic Image Interpretation. Real Logic is a framework where learning from numerical data and logical reasoning are integrated using first order logic syntax. The symbols of the signature of Real Logic are interpreted in the data-space, i.e, on the domain of real numbers. The integration of learning and reasoning obtained in Real Logic allows us to formalize learning as approximate satisfiability in the presence of logical constraints, and to perform inference on symbolic and numerical data. After introducing a refined version of the formalism, we describe its implementation into Logic Tensor Networks which uses deep learning within Google's T ensor F low ™. We evaluate LTN on the task of classifying objects and their parts in images, where we combine state-of-the-art-object detectors with a part-of ontology. LTN outperforms the state-of-the-art on object classification, and improves the performances on part-of relation detection with respect to a rule-based baseline."
]
} |
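A minimal sketch of the relaxation scheme shared by the frameworks listed above: predicate outputs are differentiable truth degrees in [0, 1], a FOL rule is translated with fuzzy connectives, and its degree of satisfaction becomes a penalty that can be backpropagated. The product (Goguen) semantics, the mean aggregation for the universal quantifier, and the example rule are assumptions made purely for illustration.

import torch

# Fuzzy connectives under product (Goguen) semantics; truth values live in [0, 1].
def f_and(a, b):      return a * b
def f_or(a, b):       return a + b - a * b
def f_implies(a, b):  return torch.where(a <= b, torch.ones_like(a), b / a.clamp_min(1e-7))
def f_forall(vals):   return vals.mean()   # a common "soft" choice for the universal quantifier

# Relax the rule  forall x: cat(x) -> animal(x)  into a differentiable penalty.
cat    = torch.rand(32, requires_grad=True)   # stand-in for network outputs of predicate cat(x)
animal = torch.rand(32, requires_grad=True)   # stand-in for network outputs of predicate animal(x)
truth  = f_forall(f_implies(cat, animal))     # degree of satisfaction of the formula
loss   = 1.0 - truth                          # push the formula toward truth value 1
loss.backward()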
1907.11468 | 2965936489 | Deep learning has been shown to achieve impressive results in several domains like computer vision and natural language processing. Deep architectures are typically trained following a supervised scheme and, therefore, they rely on the availability of a large amount of labeled training data to effectively learn their parameters. Neuro-symbolic approaches have recently gained popularity to inject prior knowledge into a deep learner without requiring it to induce this knowledge from data. These approaches can potentially learn competitive solutions with a significant reduction of the amount of supervised data. A large class of neuro-symbolic approaches is based on First-Order Logic to represent prior knowledge, that is relaxed to a differentiable form using fuzzy logic. This paper shows that the loss function expressing these neuro-symbolic learning tasks can be unambiguously determined given the selection of a t-norm generator. When restricted to simple supervised learning, the presented theoretical apparatus provides a clean justification to the popular cross-entropy loss, that has been shown to provide faster convergence and to reduce the vanishing gradient problem in very deep structures. One advantage of the proposed learning formulation is that it can be extended to all the knowledge that can be represented by a neuro-symbolic method, and it allows the development of a novel class of loss functions, that the experimental results show to lead to faster convergence rates than other approaches previously proposed in the literature. | Within this class of approaches, it is of fundamental importance to define how to perform the fuzzy relaxation of the formulas in the knowledge base. For instance, @cite_31 introduces a learning framework where formulas are converted according to the Łukasiewicz t-norm and t-conorms. @cite_18 also proposes to convert the formulas according to Łukasiewicz logic; however, this paper exploits the weak conjunction in place of the t-norms to get convex functional constraints. A more practical approach has been considered in Semantic Based Regularization (SBR), where all the fundamental t-norms have been evaluated on different learning tasks @cite_28 . However, a unified principle for expressing the cost function to be optimized with respect to the selected fuzzy logic does not emerge from this prior work. For example, all the aforementioned approaches rely on a fixed loss function linearly measuring the distance of the formulas from the 1-value. Even if it may be justified from a logical point of view ( @math ), it is not clear whether this choice is principled from a learning standpoint, since all deep learning approaches use very different loss functions to enforce the fitting of the supervised data. | {
"cite_N": [
"@cite_28",
"@cite_31",
"@cite_18"
],
"mid": [
"2201744460",
"2793731154",
"2803642127"
],
"abstract": [
"Abstract This paper proposes a unified approach to learning from constraints, which integrates the ability of classical machine learning techniques to learn from continuous feature-based representations with the ability of reasoning using higher-level semantic knowledge typical of Statistical Relational Learning. Learning tasks are modeled in the general framework of multi-objective optimization, where a set of constraints must be satisfied in addition to the traditional smoothness regularization term. The constraints translate First Order Logic formulas, which can express learning-from-example supervisions and general prior knowledge about the environment by using fuzzy logic. By enforcing the constraints also on the test set, this paper presents a natural extension of the framework to perform collective classification. Interestingly, the theory holds for both the case of data represented by feature vectors and the case of data simply expressed by pattern identifiers, thus extending classic kernel machines and graph regularization, respectively. This paper also proposes a probabilistic interpretation of the proposed learning scheme, and highlights intriguing connections with probabilistic approaches like Markov Logic Networks. Experimental results on classic benchmarks provide clear evidence of the remarkable improvements that are obtained with respect to related approaches.",
"We present a formulation of deep learning that aims at producing a large margin classifier. The notion of margin, minimum distance to a decision boundary, has served as the foundation of several theoretically profound and empirically successful results for both classification and regression tasks. However, most large margin algorithms are applicable only to shallow models with a preset feature representation; and conventional margin methods for neural networks only enforce margin at the output layer. Such methods are therefore not well suited for deep networks. In this work, we propose a novel loss function to impose a margin on any chosen set of layers of a deep network (including input and hidden layers). Our formulation allows choosing any norm on the metric measuring the margin. We demonstrate that the decision boundary obtained by our loss has nice properties compared to standard classification loss functions. Specifically, we show improved empirical results on the MNIST, CIFAR-10 and ImageNet datasets on multiple tasks: generalization from small training sets, corrupted labels, and robustness against adversarial perturbations. The resulting loss is general and complementary to existing data augmentation (such as random adversarial input transform) and regularization techniques (such as weight decay, dropout, and batch norm).",
"We study binary classification in the presence of class-conditional random noise, where the learner gets to see labels that are flipped independently with some probability, and where the flip probability depends on the class. Our goal is to devise learning algorithms that are efficient and statistically consistent with respect to commonly used utility measures. In particular, we look at a family of measures motivated by their application in domains where cost-sensitive learning is necessary (for example, when there is class imbalance). In contrast to most of the existing literature on consistent classification that are limited to the classical 0-1 loss, our analysis includes more general utility measures such as the AM measure (arithmetic mean of True Positive Rate and True Negative Rate). For this problem of cost-sensitive learning under class-conditional random noise, we develop two approaches that are based on suitably modifying surrogate losses. First, we provide a simple unbiased estimator of any loss, and obtain performance bounds for empirical utility maximization in the presence of i.i.d. data with noisy labels. If the loss function satis_es a simple symmetry condition, we show that using unbiased estimator leads to an efficient algorithm for empirical maximization. Second, by leveraging a reduction of risk minimization under noisy labels to classification with weighted 0-1 loss, we suggest the use of a simple weighted surrogate loss, for which we are able to obtain strong utility bounds. This approach implies that methods already used in practice, such as biased SVM and weighted logistic regression, are provably noise-tolerant. For two practically important measures in our family, we show that the proposed methods are competitive with respect to recently proposed methods for dealing with label noise in several benchmark data sets."
]
} |
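The remark above about the fixed "distance from the 1-value" loss can be made concrete by comparing the penalties induced by two additive t-norm generators: mapping a formula's truth degree through the product-t-norm generator -log(t) yields a cross-entropy-like penalty, while the Łukasiewicz generator 1 - t reproduces the linear distance from 1. This is a simplified sketch of the generator-to-loss mapping, not the full construction discussed in the surveyed papers.

import torch

def g_product(t):      return -torch.log(t.clamp_min(1e-7))  # product t-norm generator -> log loss
def g_lukasiewicz(t):  return 1.0 - t                        # Lukasiewicz generator -> linear loss

p = torch.tensor([0.9, 0.5, 0.1])  # predicted truth degrees of a formula on three examples
print(g_product(p))      # penalty grows sharply for badly violated formulas
print(g_lukasiewicz(p))  # constant-slope penalty, i.e. the "1 - truth" loss discussed above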
1907.11468 | 2965936489 | Deep learning has been shown to achieve impressive results in several domains like computer vision and natural language processing. Deep architectures are typically trained following a supervised scheme and, therefore, they rely on the availability of a large amount of labeled training data to effectively learn their parameters. Neuro-symbolic approaches have recently gained popularity to inject prior knowledge into a deep learner without requiring it to induce this knowledge from data. These approaches can potentially learn competitive solutions with a significant reduction of the amount of supervised data. A large class of neuro-symbolic approaches is based on First-Order Logic to represent prior knowledge, that is relaxed to a differentiable form using fuzzy logic. This paper shows that the loss function expressing these neuro-symbolic learning tasks can be unambiguously determined given the selection of a t-norm generator. When restricted to simple supervised learning, the presented theoretical apparatus provides a clean justification to the popular cross-entropy loss, that has been shown to provide faster convergence and to reduce the vanishing gradient problem in very deep structures. One advantage of the proposed learning formulation is that it can be extended to all the knowledge that can be represented by a neuro-symbolic method, and it allows the development of a novel class of loss functions, that the experimental results show to lead to faster convergence rates than other approaches previously proposed in the literature. | From a learning point of view, different quantifier conversions can be taken into account and validated, as well. For instance, the arithmetic mean and the maximum operator have been used in @cite_28 to convert the universal and existential quantifiers, respectively. Different possibilities have been considered for the universal quantifier in @cite_14 , while the existential quantifier depends on this choice via the application of the strong negation using De Morgan's law. The arithmetic mean operator has been shown to achieve better performance in the conversion of the universal quantifier @cite_14 , with the existential quantifier implemented by Skolemization. However, the universal and existential quantifiers can be thought of as a generalized AND and OR, respectively. Therefore, converting the quantifiers using a mean operator has no direct justification inside a logic theory. | {
"cite_N": [
"@cite_28",
"@cite_14"
],
"mid": [
"1511737804",
"2175917793"
],
"abstract": [
"Motivated by applications to program verification, we study a decision procedure for satisfiability in an expressive fragment of a theory of arrays, which is parameterized by the theories of the array elements. The decision procedure reduces satisfiability of a formula of the fragment to satisfiability of an equisatisfiable quantifier-free formula in the combined theory of equality with uninterpreted functions (EUF), Presburger arithmetic, and the element theories. This fragment allows a constrained use of universal quantification, so that one quantifier alternation is allowed, with some syntactic restrictions. It allows expressing, for example, that an assertion holds for all elements in a given index range, that two arrays are equal in a given range, or that an array is sorted. We demonstrate its expressiveness through applications to verification of sorting algorithms and parameterized systems. We also prove that satisfiability is undecidable for several natural extensions to the fragment. Finally, we describe our implementation in the πVC verifying compiler.",
"Many real-world knowledge-based systems must deal with information coming from different sources that invariably leads to incompleteness, overspecification, or inherently uncertain content. The presence of these varying levels of uncertainty doesn't mean that the information is worthless --- rather, these are hurdles that the knowledge engineer must learn to work with. In this paper, we continue work on an argumentation-based framework that extends the well-known Defeasible Logic Programming (DeLP) language with probabilistic uncertainty, giving rise to the Defeasible Logic Programming with Presumptions and Probabilistic Environments (DeLP3E) model. Our prior work focused on the problem of belief revision in DeLP3E, where we proposed a non-prioritized class of revision operators called AFO (Annotation Function-based Operators) to solve this problem. In this paper, we further study this class and argue that in some cases it may be desirable to define revision operators that take quantitative aspects into account, such as how the probabilities of certain literals or formulas of interest change after the revision takes place. To the best of our knowledge, this problem has not been addressed in the argumentation literature to date. We propose the QAFO (Quantitative Annotation Function-based Operators) class of operators, a subclass of AFO, and then go on to study the complexity of several problems related to their specification and application in revising knowledge bases. Finally, we present an algorithm for computing the probability that a literal is warranted in a DeLP3E knowledge base, and discuss how it could be applied towards implementing QAFO-style operators that compute approximations rather than exact operations."
]
} |
1907.11322 | 2966078947 | By expanding the connection of objects to the Internet and their entry to human life, the issue of security and privacy has become important. In order to enhance security and privacy on the Internet, many security protocols have been developed. Unfortunately, the security analyses that have been carried out on these protocols show that they are vulnerable to one or a few attacks, which eliminates the use of these protocols. Therefore, the need for a security protocol on the Internet of Things (IoT) has not yet been resolved. Recently, Khor and Sidorov cryptanalyzed the protocol and presented an improved version of it. In this paper, at first, we show that this protocol also does not have sufficient security and so it is not recommended to be used in any application. More precisely, we present a full secret disclosure attack against this protocol, which extracts the whole secrets of the protocol by two communications with the target tag. In addition, an ultralightweight mutual authentication RFID protocol for blockchain enabled supply chains, supported by formal and informal security proofs, was recently proposed. However, we present a full secret disclosure attack against this protocol as well. | A factor that can help expand the IoT is to create confidence for users of this technology in terms of maintaining their privacy and security, because if there is no proper security in its infrastructure, damage to IoT-based equipment, the possibility of losing personal information, the loss of privacy, and even the disclosure of economic and other data are highly likely, and this may lead to the inability to use it in critical applications. Ronen and Shamir in @cite_6 pointed out that if security is not taken into account in the IoT-based infrastructure, the technology will threaten the future of the world like a nuclear bomb. | {
"cite_N": [
"@cite_6"
],
"mid": [
"2573616158"
],
"abstract": [
"Internet of Things (IoT) is a technology in which for any object the ability to send data via communications networks is provided. Ensuring the security of Internet services and applications is an important factor in attracting users to use this platform. In the other words, if people are unable to trust that the equipment and information will be reasonably safe against damage, abuse and the other security threats, this lack of trust leads to a reduction in the use of IoT-based applications. Recently, Tewari and Gupta (J Supercomput 1–18, 2016) have proposed an ultralightweight RFID authentication protocol to provide desired security for objects in IoT. In this paper, we consider the security of the proposed protocol and present a passive secret disclosure attack against it. The success probability of the attack is ‘1’ while the complexity of the attack is only eavesdropping one session of the protocol. The presented attack has negligible complexity. We verify the correctness of the presented attack by simulation."
]
} |
1907.11322 | 2966078947 | By expanding the connection of objects to the Internet and their entry to human life, the issue of security and privacy has become important. In order to enhance security and privacy on the Internet, many security protocols have been developed. Unfortunately, the security analyses that have been carried out on these protocols show that they are vulnerable to one or a few attacks, which eliminates the use of these protocols. Therefore, the need for a security protocol on the Internet of Things (IoT) has not yet been resolved. Recently, Khor and Sidorov cryptanalyzed the protocol and presented an improved version of it. In this paper, at first, we show that this protocol also does not have sufficient security and so it is not recommended to be used in any application. More precisely, we present a full secret disclosure attack against this protocol, which extracts the whole secrets of the protocol by two communications with the target tag. In addition, an ultralightweight mutual authentication RFID protocol for blockchain enabled supply chains, supported by formal and informal security proofs, was recently proposed. However, we present a full secret disclosure attack against this protocol as well. | Reviewing the proposed mechanisms and designing new models that are compatible with IoT-based devices are also very important. Given that an IoT system can include many objects with limited resources, it requires special protocols to ensure that privacy and security are guaranteed. Therefore, with the further development of IoT, its security concerns are expected to receive more attention. So far, several security protocols have been proposed to ensure IoT security, e.g. @cite_14 @cite_26 @cite_2 @cite_22 @cite_12 ; however, most of them have failed to provide their security goals @cite_8 @cite_4 @cite_0 @cite_30 @cite_9 @cite_29 , and various attacks, such as disclosure of the protocol's secret values, DoS, traceability, and impersonation, were reported against them. The presentation of these attacks has advanced protocol-design knowledge, and protocol designers now design their protocols to resist the attacks published so far. Unfortunately, there are still attacks against newly designed protocols, and this science has not yet matured. | {
"cite_N": [
"@cite_30",
"@cite_14",
"@cite_26",
"@cite_22",
"@cite_4",
"@cite_8",
"@cite_9",
"@cite_29",
"@cite_0",
"@cite_2",
"@cite_12"
],
"mid": [
"2962487379",
"2887736592",
"2766595485",
"2755191230",
"2754804809",
"1565176535",
"2766954398",
"2043127616",
"2573616158",
"2887792592",
"2804338997"
],
"abstract": [
"Abstract The safety of medical data and equipment plays a vital role in today’s world of Medical Internet of Things (MIoT). These IoT devices have many constraints (e.g., memory size, processing capacity, and power consumption) that make it challenging to use cost-effective and energy-efficient security solutions. Recently, researchers have proposed a few Radio-Frequency Identification (RFID) based security solutions for MIoT. The use of RFID technology in securing IoT systems is rapidly increasing because it provides secure and lightweight safety mechanisms for these systems. More recently, authors have proposed a lightweight RFID mutual authentication (LRMI) protocol. The authors argue that LRMI meets the necessary security requirements for RFID systems, and the same applies to MIoT applications as well. In this paper, our contribution has two-folds, firstly we analyze the LRMI protocol’s security to demonstrate that it is vulnerable to various attacks such as secret disclosure, reader impersonation, and tag traceability. Also, it is not able to preserve the anonymity of the tag and the reader. Secondly, we propose a new secure and lightweight mutual RFID authentication (SecLAP) protocol, which provides secure communication and preserves privacy in MIoT systems. Our security analysis shows that the SecLAP protocol is robust against de-synchronization, replay, reader tag impersonation, and traceability attacks, and it ensures forward and backward data communication security. We use Burrows-Abadi-Needham (BAN) logic to validate the security features of SecLAP. Moreover, we compare SecLAP with the state-of-the-art and validate its performance through a Field Programmable Gate Array (FPGA) implementation, which shows that it is lightweight, consumes fewer resources on tags concerning computation functions, and requires less number of flows.",
"Internet of Things (IoT) has stimulated great interest in many researchers owing to its capability to connect billions of physical devices to the internet via heterogeneous access network. Security is a paramount aspect of IoT that needs to be addressed urgently to keep sensitive data private. However, from previous research studies, a number of security flaws in terms of keeping data private can be identified. Tewari and Gupta proposed an ultra-lightweight mutual authentication pRotocol that utilizes bitwise operation to achieve security in IoT networks that use RFID tags. The pRotocol is improved by Wang et. al. to prevent a full key disclosure attack. However, this paper shows that both of the pRotocols are susceptible to full disclosure, man-in-the-middle, tracking, and de-synchronization attacks. A detailed security analysis is conducted and results are presented to prove their vulnerability. Based on the aforementioned analysis, the pRotocol is modified and improved using a three pass mutual authentication. GNY logic is used to formally verify the security of the pRotocol.",
"Abstract In large-scale Internet of Things (IoT) systems, huge volumes of data are collected from anywhere at any time, which may invade people’s privacy, especially when the systems are used in medical or daily living environments. Preserving privacy is an important issue, and higher privacy demands usually tend to require weaker identity. However, previous research has indicated that strong security tends to demand strong identity, especially in authentication processes. Thus, defining a good tradeoff between privacy and security remains a challenging problem. This motivates us to develop a privacy-preserving and accountable authentication protocol for IoT end-devices with weaker identity, which integrates an adapted construction of short group signatures and Shamir’s secret sharing scheme. We analyze the security properties of our protocol in the context of six typical attacks and verify the formal security using the Proverif tool. Experiments using our implementation in MacBook Pro and Intel Edison development platforms show that our authentication protocol is feasible in practice.",
"In recent years, RFID (radio-frequency identification) systems are widely used in many applications. One of the most important applications for this technology is the Internet of things (IoT). Therefore, researchers have proposed several authentication protocols that can be employed in RFID-based IoT systems, and they have claimed that their protocols can satisfy all security requirements of these systems. However, in RFID-based IoT systems we have mobile readers that can be compromised by the adversary. Due to this attack, the adversary can compromise a legitimate reader and obtain its secrets. So, the protocol designers must consider the security of their proposals even in the reader compromised scenario. In this paper, we consider the security of the ultra-lightweight RFID mutual authentication (ULRMAPC) protocol recently proposed by They claimed that their protocol could be applied in the IoT systems and provide strong security. However, in this paper we show that their protocol is vulnerable to denial of service, reader and tag impersonation and de-synchronization attacks. To provide a solution, we present a new authentication protocol, which is more secure than the ULRMAPC protocol and also can be employed in RFID-based IoT systems.",
"Security experts have demonstrated numerous risks imposed by Internet of Things (IoT) devices on organizations. Due to the widespread adoption of such devices, their diversity, standardization obstacles, and inherent mobility, organizations require an intelligent mechanism capable of automatically detecting suspicious IoT devices connected to their networks. In particular, devices not included in a white list of trustworthy IoT device types (allowed to be used within the organizational premises) should be detected. In this research, Random Forest, a supervised machine learning algorithm, was applied to features extracted from network traffic data with the aim of accurately identifying IoT device types from the white list. To train and evaluate multi-class classifiers, we collected and manually labeled network traffic data from 17 distinct IoT devices, representing nine types of IoT devices. Based on the classification of 20 consecutive sessions and the use of majority rule, IoT device types that are not on the white list were correctly detected as unknown in 96 of test cases (on average), and white listed device types were correctly classified by their actual types in 99 of cases. Some IoT device types were identified quicker than others (e.g., sockets and thermostats were successfully detected within five TCP sessions of connecting to the network). Perfect detection of unauthorized IoT device types was achieved upon analyzing 110 consecutive sessions; perfect classification of white listed types required 346 consecutive sessions, 110 of which resulted in 99.49 accuracy. Further experiments demonstrated the successful applicability of classifiers trained in one location and tested on another. In addition, a discussion is provided regarding the resilience of our machine learning-based IoT white listing method to adversarial attacks.",
"The Internet of Things (IoT) will feature pervasive sensing and control capabilities via a massive deployment of machine-type communication (MTC) devices. The limited hardware, low-complexity, and severe energy constraints of MTC devices present unique communication and security challenges. As a result, robust physical-layer security methods that can supplement or even replace lightweight cryptographic protocols are appealing solutions. In this paper, we present an overview of low-complexity physical-layer security schemes that are suitable for the IoT. A local IoT deployment is modeled as a composition of multiple sensor and data subnetworks, with uplink communications from sensors to controllers, and downlink communications from controllers to actuators. The state of the art in physical-layer security for sensor networks is reviewed, followed by an overview of communication network security techniques. We then pinpoint the most energy-efficient and low-complexity security techniques that are best suited for IoT sensing applications. This is followed by a discussion of candidate low-complexity schemes for communication security, such as on–off switching and space-time block codes. The paper concludes by discussing open research issues and avenues for further work, especially the need for a theoretically well-founded and holistic approach for incorporating complexity constraints in physical-layer security designs.",
"In many Internet of Thing (IoT) application domains security is a critical requirement, because malicious parties can undermine the effectiveness of IoT-based systems by compromising single components and or communication channels. Thus, a security infrastructure is needed to ensure the proper functioning of such systems even under attack. However, it is also critical that security be at a reasonable resource and energy cost. In this article, we focus on the problem of efficiently and effectively securing IoT networks by carefully allocating security resources in the network area. In particular, given a set of security resources R and a set of attacks to be faced A, our method chooses the subset of R that best addresses the attacks in A, and the set of locations where to place them, that ensure the security coverage of all IoT devices at minimum cost and energy consumption. We model our problem according to game theory and provide a Pareto-optimal solution in which the cost of the security infrastructure, its energy consumption, and the probability of a successful attack are minimized. Our experimental evaluation shows that our technique improves the system robustness in terms of packet delivery rate for different network topologies. Furthermore, we also provide a method for handling the computation of the resource allocation plan for large-scale networks scenarios, where the optimization problem may require an unreasonable amount of time to be solved. We show how our proposed method drastically reduces the computing time, while providing a reasonable approximation of the optimal solution.",
"With the advancement of Internet of Things (IoT) technology and rapid growth of WSN applications, provides an opportunity to connect WSN to IoT, which results in the secure sensor data can be accessible via in secure Internet. The integration of WSN and IoT effects lots of security challenges and requires strict user authentication mechanism. Quite a few isolated user verification or authentication schemes using the password, the biometrics and the smart card have been proposed in the literature. In 2013, A.K designed a biometric-based remote user verification scheme using smart card for heterogeneous wireless sensor networks. A.K insisted that their scheme is secure against several known cryptographic attacks. Unfortunately, in this manuscript we will show that their scheme fails to resist replay attack, user impersonation attack, failure to accomplish mutual authentication and failure to provide data privacy.",
"Internet of Things (IoT) is a technology in which for any object the ability to send data via communications networks is provided. Ensuring the security of Internet services and applications is an important factor in attracting users to use this platform. In the other words, if people are unable to trust that the equipment and information will be reasonably safe against damage, abuse and the other security threats, this lack of trust leads to a reduction in the use of IoT-based applications. Recently, Tewari and Gupta (J Supercomput 1–18, 2016) have proposed an ultralightweight RFID authentication protocol to provide desired security for objects in IoT. In this paper, we consider the security of the proposed protocol and present a passive secret disclosure attack against it. The success probability of the attack is ‘1’ while the complexity of the attack is only eavesdropping one session of the protocol. The presented attack has negligible complexity. We verify the correctness of the presented attack by simulation.",
"IoT devices are increasingly being implicated in cyber-attacks, raising community concern about the risks they pose to critical infrastructure, corporations, and citizens. In order to reduce this risk, the IETF is pushing IoT vendors to develop formal specifications of the intended purpose of their IoT devices, in the form of a Manufacturer Usage Description (MUD), so that their network behavior in any operating environment can be locked down and verified rigorously. This paper aims to assist IoT manufacturers in developing and verifying MUD profiles, while also helping adopters of these devices to ensure they are compatible with their organizational policies. Our first contribution is to develop a tool that takes the traffic trace of an arbitrary IoT device as input and automatically generates the MUD profile for it. We contribute our tool as open source, apply it to 28 consumer IoT devices, and highlight insights and challenges encountered in the process. Our second contribution is to apply a formal semantic framework that not only validates a given MUD profile for consistency, but also checks its compatibility with a given organizational policy. Finally, we apply our framework to representative organizations and selected devices, to demonstrate how MUD can reduce the effort needed for IoT acceptance testing.",
"IoT devices are being widely deployed. Many of them are vulnerable due to insecure implementations and configuration. As a result, many networks already have vulnerable devices that are easy to compromise. This has led to a new category of malware specifically targeting IoT devices. Existing intrusion detection techniques are not effective in detecting compromised IoT devices given the massive scale of the problem in terms of the number of different manufacturers involved. In this paper, we present D \"IoT, a system for detecting compromised IoT devices effectively. In contrast to prior work, D \"IoT uses a novel self-learning approach to classify devices into device types and build for each of these normal communication profiles that can subsequently be used to detect anomalous deviations in communication patterns. D \"IoT is completely autonomous and can be trained in a distributed crowdsourced manner without requiring human intervention or labeled training data. Consequently, D \"IoT copes with the emergence of new device types as well as new attacks. By systematic experiments using more than 30 real-world IoT devices, we show that D \"IoT is effective (96 detection rate with 1 false alarms) and fast (<0.03 s.) at detecting devices compromised by the infamous Mirai malware."
]
} |
1907.11322 | 2966078947 | By expanding the connection of objects to the Internet and their entry to human life, the issue of security and privacy has become important. In order to enhance security and privacy on the Internet, many security protocols have been developed. Unfortunately, the security analyses that have been carried out on these protocols show that they are vulnerable to one or a few attacks, which eliminates the use of these protocols. Therefore, the need for a security protocol on the Internet of Things (IoT) has not yet been resolved. Recently, Khor and Sidorov cryptanalyzed the protocol and presented an improved version of it. In this paper, at first, we show that this protocol also does not have sufficient security and so it is not recommended to be used in any application. More precisely, we present a full secret disclosure attack against this protocol, which extracts the whole secrets of the protocol by two communications with the target tag. In addition, an ultralightweight mutual authentication RFID protocol for blockchain enabled supply chains, supported by formal and informal security proofs, was recently proposed. However, we present a full secret disclosure attack against this protocol as well. | Among different design strategies for security protocols, attempts to design a secure ultralightweight protocol for constrained environments have a long (unsuccessful) history. Pioneer examples include SASI @cite_27 , RAPP @cite_1 , SLAP @cite_25 , LMAP @cite_10 and R @math AP @cite_11 , and among the recent proposals is SecLAP @cite_15 ; these and many other protocols have been compromised by later third-party analyses @cite_23 @cite_13 @cite_5 @cite_3 @cite_17 @cite_19 @cite_28 @cite_20 . All those protocols tried to provide enough security using only a few lightweight operations such as bitwise operations, e.g. logical AND, OR, XOR and rotation. However, the mentioned analyses have shown that it is not easy to design a strong protocol using cryptographically-weak components. | {
"cite_N": [
"@cite_13",
"@cite_28",
"@cite_1",
"@cite_17",
"@cite_3",
"@cite_19",
"@cite_27",
"@cite_23",
"@cite_5",
"@cite_15",
"@cite_10",
"@cite_25",
"@cite_20",
"@cite_11"
],
"mid": [
"2093422627",
"1508967933",
"2057811891",
"2152926062",
"2011329056",
"2140805804",
"2234042337",
"2121957834",
"2114497629",
"2088665600",
"1926128235",
"145176944",
"2156217225",
"179510128"
],
"abstract": [
"One of the key problems in RFID is security and privacy. The implementation of authentication protocols is a flexible and effective way to solve this problem. This letter proposes a new ultralightweight RFID authentication protocol with permutation (RAPP). RAPP avoids using unbalanced OR and AND operations and introduces a new operation named permutation. The tags only involve three operations: bitwise XOR, left rotation and permutation. In addition, unlike other existing ultralightweight protocols, the last messages exchanged in RAPP are sent by the reader so as to resist de-synchronization attacks. Security analysis shows that RAPP achieves the functionalities of the authentication protocol and is resistant to various attacks. Performance evaluation illustrates that RAPP uses fewer resources on tags in terms of computation operation, storage requirement and communication cost.",
"Since the 1980s, two approaches have been developed for analyzing security protocols. One of the approaches relies on a computational model that considers issues of complexity and probability. This approach captures a strong notion of security, guaranteed against all probabilistic polynomial-time attacks. The other approach relies on a symbolic model of protocol executions in which cryptographic primitives are treated as black boxes. Since the seminal work of Dolev and Yao, it has been realized that this latter approach enables significantly simpler and often automated proofs. However, the guarantees that it offers have been quite unclear. In this paper, we show that it is possible to obtain the best of both worlds: fully automated proofs and strong, clear security guarantees. Specifically, for the case of protocols that use signatures and asymmetric encryption, we establish that symbolic integrity and secrecy proofs are sound with respect to the computational model. The main new challenges concern secrecy properties for which we obtain the first soundness result for the case of active adversaries. Our proofs are carried out using Casrul, a fully automated tool.",
"Protocols for secure computation enable mutually distrustful parties to jointly compute on their private inputs without revealing anything but the result. Over recent years, secure computation has become practical and considerable effort has been made to make it more and more efficient. A highly important tool in the design of two-party protocols is Yao's garbled circuit construction (Yao 1986), and multiple optimizations on this primitive have led to performance improvements of orders of magnitude over the last years. However, many of these improvements come at the price of making very strong assumptions on the underlying cryptographic primitives being used (e.g., that AES is secure for related keys, that it is circular secure, and even that it behaves like a random permutation when keyed with a public fixed key). The justification behind making these strong assumptions has been that otherwise it is not possible to achieve fast garbling and thus fast secure computation. In this paper, we take a step back and examine whether it is really the case that such strong assumptions are needed. We provide new methods for garbling that are secure solely under the assumption that the primitive used (e.g., AES) is a pseudorandom function. Our results show that in many cases, the penalty incurred is not significant, and so a more conservative approach to the assumptions being used can be adopted.",
"We present a novel approach to fully homomorphic encryption (FHE) that dramatically improves performance and bases security on weaker assumptions. A central conceptual contribution in our work is a new way of constructing leveled fully homomorphic encryption schemes (capable of evaluating arbitrary polynomial-size circuits), without Gentry's bootstrapping procedure. Specifically, we offer a choice of FHE schemes based on the learning with error (LWE) or ring-LWE (RLWE) problems that have 2λ security against known attacks. For RLWE, we have: • A leveled FHE scheme that can evaluate L-level arithmetic circuits with O(λ · L3) per-gate computation -- i.e., computation quasi-linear in the security parameter. Security is based on RLWE for an approximation factor exponential in L. This construction does not use the bootstrapping procedure. • A leveled FHE scheme that uses bootstrapping as an optimization, where the per-gate computation (which includes the bootstrapping procedure) is O(λ2), independent of L. Security is based on the hardness of RLWE for quasi-polynomial factors (as opposed to the sub-exponential factors needed in previous schemes). We obtain similar results to the above for LWE, but with worse performance. Based on the Ring LWE assumption, we introduce a number of further optimizations to our schemes. As an example, for circuits of large width -- e.g., where a constant fraction of levels have width at least λ -- we can reduce the per-gate computation of the bootstrapped version to O(λ), independent of L, by batching the bootstrapping operation. Previous FHE schemes all required Ω(λ3.5) computation per gate. At the core of our construction is a much more effective approach for managing the noise level of lattice-based ciphertexts as homomorphic operations are performed, using some new techniques recently introduced by Brakerski and Vaikuntanathan (FOCS 2011).",
"Traditional security protocols are mainly concerned with authentication and key establishment and rely on predistributed keys and properties of cryptographic operators. In contrast, new application areas are emerging that establish and rely on properties of the physical world. Examples include protocols for secure localization, distance bounding, and secure time synchronization. We present a formal model for modeling and reasoning about such physical security protocols. Our model extends standard, inductive, trace-based, symbolic approaches with a formalization of physical properties of the environment, namely communication, location, and time. In particular, communication is subject to physical constraints, for example, message transmission takes time determined by the communication medium used and the distance between nodes. All agents, including intruders, are subject to these constraints and this results in a distributed intruder with restricted, but more realistic, communication capabilities than those of the standard Dolev-Yao intruder. We have formalized our model in Isabelle HOL and have used it to verify protocols for authenticated ranging, distance bounding, broadcast authentication based on delayed key disclosure, and time synchronization.",
"We study the question of basing symmetric key cryptography on weak secrets. In this setting, Alice and Bob share an n-bit secret W, which might not be uniformly random, but the adversary has at least k bits of uncertainty about it (formalized using conditional min-entropy). Since standard symmetric-key primitives require uniformly random secret keys, we would like to construct an authenticated key agreement protocol in which Alice and Bob use W to agree on a nearly uniform key R, by communicating over a public channel controlled by an active adversary Eve. We study this question in the information theoretic setting where the attacker is computationally unbounded. We show that single-round (i.e. one message) protocols do not work when k ≤ n 2, and require poor parameters even when n 2 On the other hand, for arbitrary values of k, we design a communication efficient two-round (challenge-response) protocol extracting nearly k random bits. This dramatically improves the previous construction of Renner and Wolf [32], which requires Θ(λ + log(n)) rounds where λ is the security parameter. Our solution takes a new approach by studying and constructing \"non-malleable\" seeded randomness extractors -- if an attacker sees a random seed X and comes up with an arbitrarily related seed X', then we bound the relationship between R= Ext(W;X) and R' = Ext(W;X'). We also extend our two-round key agreement protocol to the \"fuzzy\" setting, where Alice and Bob share \"close\" (but not equal) secrets WA and WB, and to the Bounded Retrieval Model (BRM) where the size of the secret W is huge.",
"In this paper, we provide a proof of unconditional security for a semi-quantum key distribution protocol introduced in a previous work. This particular protocol demonstrated the possibility of using X basis states to contribute to the raw key of the two users (as opposed to using only direct measurement results) even though a semi-quantum participant cannot directly manipulate such states. In this work, we provide a complete proof of security by deriving a lower bound of the protocol's key rate in the asymptotic scenario. Using this bound, we are able to find an error threshold value such that for all error rates less than this threshold, it is guaranteed that A and B may distill a secure secret key; for error rates larger than this threshold, A and B should abort. We demonstrate that this error threshold compares favorably to several fully quantum protocols. We also comment on some interesting observations about the behavior of this protocol under certain noise scenarios.",
"Standard security proofs of quantum-key-distribution (QKD) protocols often rely on symmetry arguments. In this paper, we prove the security of a three-state protocol that does not possess rotational symmetry. The three-state QKD protocol we consider involves three qubit states, where the first two states @math and @math can contribute to key generation, and the third state @math is for channel estimation. This protocol has been proposed and implemented experimentally in some frequency-based QKD systems where the three states can be prepared easily. Thus, by founding on the security of this three-state protocol, we prove that these QKD schemes are, in fact, unconditionally secure against any attacks allowed by quantum mechanics. The main task in our proof is to upper bound the phase error rate of the qubits given the bit error rates observed. Unconditional security can then be proved not only for the ideal case of a single-photon source and perfect detectors, but also for the realistic case of a phase-randomized weak coherent light source and imperfect threshold detectors. Our result in the phase error rate upper bound is independent of the loss in the channel. Also, we compare the three-state protocol with the Bennett-Brassard 1984 (BB84) protocol. For the single-photon source case, our result proves that the BB84 protocol strictly tolerates a higher quantum bit error rate than the three-state protocol, while for the coherent-source case, the BB84 protocol achieves a higher key generation rate and secure distance than the three-state protocol when a decoy-state method is used.",
"We formalize the Dolev-Yao model of security protocols, using a notation based on multiset rewriting with existentials. The goals are to provide a simple formal notation for describing security protocols, to formalize the assumptions of the Dolev-Yao model using this notation, and to analyze the complexity of the secrecy problem under various restrictions. We prove that, even for the case where we restrict the size of messages and the depth of message encryption, the secrecy problem is undecidable for the case of an unrestricted number of protocol roles and an unbounded number of new nonces. We also identify several decidable classes, including a DEXP-complete class when the number of nonces is restricted, and an NP-complete class when both the number of nonces and the number of roles is restricted. We point out a remaining open complexity problem, and discuss the implications these results have on the general topic of protocol analysis.",
"In the unconditionally reliable message transmission (URMT) problem, two non-faulty players, the sender S and the receiver R are part of a synchronous network modeled as a directed graph. S has a message that he wishes to send to R; the challenge is to design a protocol such that after exchanging messages as per the protocol, the receiver R should correctly obtain S's message with arbitrarily small error probability Δ, in spite of the influence of a Byzantine adversary that may actively corrupt up to t nodes in the network (we denote such a URMT protocol as (t, (1 - Δ))-reliable). While it is known that (2t + 1) vertex disjoint directed paths from S to R are necessary and sufficient for (t, 1)-reliable URMT (that is with zero error probability), we prove that a strictly weaker condition, which we define and denote as (2t, t)-special-connectivity, together with just (t+1) vertex disjoint directed paths from S to R, is necessary and sufficient for (t, (1' - Δ))-reliable URMT with arbitrarily small (but non-zero) error probability, Δ. Thus, we demonstrate the power of randomization in the context of reliable message transmission. In fact, for any positive integer k > 0, we show that there always exists a digraph Gk such that (k, 1)-reliable URMT is impossible over Gk whereas there exists a (2k, (1 - Δ))-reliable URMT protocol, Δ > 0 in Gk. In a digraph G on which (t, (1 - Δ))-reliable URMT is possible, an edge is called critical if the deletion of that edge renders (t, (1 - Δ))-reliable URMT impossible. We give an example of a digraph G on n vertices such that G has Ω(n2) critical edges. This is quite baffling since no such graph exists for the case of perfect reliable message transmission (or equivalently (t, 1)-reliable URMT) or when the underlying graph is undirected. Such is the anomalous behavior of URMT protocols (when \"randomness meet directedness\") that it makes it extremely hard to design efficient protocols over arbitrary digraphs. However, if URMT is possible between every pair of vertices in the network, then we present efficient protocols for the same.",
"When formalizing security protocols, different specification languages support very different reasoning methodologies, whose results are not directly or easily comparable. Therefore, establishing clear mappings among different frameworks is highly desirable, as it permits various methodologies to cooperate by interpreting theoretical and practical results of one system in another. In this paper, we examine the non-trivial relationship between two general verification frameworks: multiset rewriting (MSR) and a process algebra (PA) inspired to CCS and the π-calculus. Although defining a simple and general bijection between MSR and PA appears difficult, we show that the sublanguages needed to specify a large class of cryptographic protocols (immediate decryption protocols) admits an effective translation that is not only bijective and trace-preserving, but also induces a weak form of bisimulation across the two languages. In particular, the correspondence sketched in this abstract permits transferring several important trace-based properties such as secrecy and many forms of authentication.",
"We consider what constitutes identities in cryptography. Typical examples include your name and your social-security number, or your fingerprint iris-scan, or your address, or your (non-revoked) public-key coming from some trusted public-key infrastructure. In many situations, however, where you are defines your identity. For example, we know the role of a bank-teller behind a bullet-proof bank window not because she shows us her credentials but by merely knowing her location. In this paper, we initiate the study of cryptographic protocols where the identity (or other credentials and inputs) of a party are derived from its geographic location. We start by considering the central task in this setting, i.e., securely verifying the position of a device. Despite much work in this area, we show that in the Vanilla (or standard) model, the above task (i.e., of secure positioning) is impossible to achieve. In light of the above impossibility result, we then turn to the Bounded Storage Model and formalize and construct information theoretically secure protocols for two fundamental tasks: Secure Positioning; and Position Based Key Exchange. We then show that these tasks are in fact universal in this setting --- we show how we can use them to realize Secure Multi-Party Computation.Our main contribution in this paper is threefold: to place the problem of secure positioning on a sound theoretical footing; to prove a strong impossibility result that simultaneously shows the insecurity of previous attempts at the problem; and to present positive results by showing that the bounded-storage framework is, in fact, one of the \"right\" frameworks (there may be others) to study the foundations of position-based cryptography.",
"In this paper, we discuss symmetric-key and public-key protocols for key management in electricity transmission and distribution substations — both for communication within substations, and between substations and the network control center. Key management in the electricity network is widely regarded as a challenging problem, not only because of the scale, but also due to the fact that any mechanism must be implemented in resource-constrained environments. NISTIR 7628, the foundation document for the architecture of the US Smart Grid, mentions key management as one of the most important research areas, and the IEC 62351 standards committee has already initiated a new specification dedicated to key management. In this document, we describe different variants of symmetric-key and public-key protocols. Our design is motivated by the need to keep the mechanism simple, robust, usable and still cost effective. It is important to take into account the complexity and the costs involved not just in the initial bootstrapping of trust but also in subsequent key management operations like key update and revocation. It is vital to determine the complexity and the cost of recovery mechanisms — recovery not only from malicious, targeted attacks but also from unintentional failures. We present a detailed threat model, analysing a range of scenarios from physical intrusion through disloyal maintenance personnel to supply-chain attacks, network intrusions and attacks on central systems. We conclude that while using cryptography to secure wide area communication between the substation and the network control center brings noticeable benefits, the benefits of using cryptography within the substation bay are much less obvious; we expect that any such use will be essentially for compliance. The protocols presented in this paper are informed by this threat model and are optimised for robustness, including simplicity, usability and cost.",
"Protocol reverse engineering has often been a manual process that is considered time-consuming, tedious and error-prone. To address this limitation, a number of solutions have recently been proposed to allow for automatic protocol reverse engineering. Unfortunately, they are either limited in extracting protocol fields due to lack of program semantics in network traces or primitive in only revealing the flat structure of protocol format. In this paper, we present a system called AutoFormat that aims at not only extracting protocol fields with high accuracy, but also revealing the inherently “non-flat”, hierarchical structures of protocol messages. AutoFormat is based on the key insight that different protocol fields in the same message are typically handled in different execution contexts (e.g., the runtime call stack). As such, by monitoring the program execution, we can collect the execution context information for every message byte (annotated with its offset in the entire message) and cluster them to derive the protocol format. We have evaluated our system with more than 30 protocol messages from seven protocols, including two text-based protocols (HTTP and SIP), three binary-based protocols (DHCP, RIP, and OSPF), one hybrid protocol (CIFS SMB), as well as one unknown protocol used by a real-world malware. Our results show that AutoFormat can not only identify individual message fields automatically and with high accuracy (an average 93.4 match ratio compared with Wireshark), but also unveil the structure of the protocol format by revealing possible relations (e.g., sequential, parallel, and hierarchical) among the message fields. ∗Part of this research has been supported by the National Science Foundation under grants CNS-0716376 and CNS-0716444. The bulk of this work was performed when the first author was visiting George Mason University in Summer 2007."
]
} |
1907.11322 | 2966078947 | By expanding the connection of objects to the Internet and their entry to human life, the issue of security and privacy has become important. In order to enhance security and privacy on the Internet, many security protocols have been developed. Unfortunately, the security analyses that have been carried out on these protocols show that they are vulnerable to one or a few attacks, which eliminates the use of these protocols. Therefore, the need for a security protocol on the Internet of Things (IoT) has not yet been resolved. Recently, Khor and Sidorov cryptanalyzed the protocol and presented an improved version of it. In this paper, at first, we show that this protocol also does not have sufficient security and so it is not recommended to be used in any application. More precisely, we present a full secret disclosure attack against this protocol, which extracts the whole secrets of the protocol by two communications with the target tag. In addition, an ultralightweight mutual authentication RFID protocol for blockchain enabled supply chains, supported by formal and informal security proofs, was recently proposed. However, we present a full secret disclosure attack against this protocol as well. | In the line of designing ultralightweight protocols, in @cite_7 , Tewari and Gupta proposed a new ultralightweight authentication protocol for IoT and claimed that their protocol satisfies all security requirements. However, in @cite_19 , an efficient passive secret disclosure attack was applied to this protocol. Moreover, in @cite_4 , Wang cryptanalyzed the Tewari and Gupta protocol and also proposed an improved version of it. This protocol was later analysed by Khor and Sidorov @cite_16 , who also proposed an improved protocol following the same design paradigm. In this paper, we consider the security of this improved protocol, which has been proposed by Khor and Sidorov, and for simplicity, we call it KSP (which stands for Khor and Sidorov protocol) and show that KSP is vulnerable to a desynchronization attack and also to a secret disclosure attack. | {
"cite_N": [
"@cite_19",
"@cite_16",
"@cite_4",
"@cite_7"
],
"mid": [
"1484207026",
"2732131698",
"2093422627",
"2082342720"
],
"abstract": [
"proposed a novel ultralightweight RFID mutual authentication protocol [1] that has recently been analyzed in several articles. In this letter, we first propose a desynchronization attack that succeeds with probability almost 1, which improves upon the 0.25 given in a previous analysis by We also show that the bad properties of the proposed permutation function can be exploited to disclose several bits of the tag's secret rather than just 1bit as previously shown by , which increases the power of a traceability attack. Finally, we show how to extend the aforementioned attack to run a full disclosure attack, which requires to eavesdrop less protocol runs than the proposed attack by i.e., 192<<2 30. Copyright © 2013 John Wiley & Sons, Ltd.",
"Recently, Tewari and Gupta proposed a ultra-lightweight mutual authentication protocol in IoT environments for RFID tags. Their protocol aims to provide secure communication with least cost in both storage and computation. Unfortunately, in this paper, we exploit the vulnerability of this protocol. In this attack, an attacker can obtain the key shared between a back-end database server and a tag. We also explore the possibility in patching the system with some modifications.",
"One of the key problems in RFID is security and privacy. The implementation of authentication protocols is a flexible and effective way to solve this problem. This letter proposes a new ultralightweight RFID authentication protocol with permutation (RAPP). RAPP avoids using unbalanced OR and AND operations and introduces a new operation named permutation. The tags only involve three operations: bitwise XOR, left rotation and permutation. In addition, unlike other existing ultralightweight protocols, the last messages exchanged in RAPP are sent by the reader so as to resist de-synchronization attacks. Security analysis shows that RAPP achieves the functionalities of the authentication protocol and is resistant to various attacks. Performance evaluation illustrates that RAPP uses fewer resources on tags in terms of computation operation, storage requirement and communication cost.",
"RAPP (RFID Authentication Protocol with Permutation) is a recently proposed and efficient ultralightweight authentication protocol. Although it maintains the structure of the other existing ultralightweight protocols, the operation used in it is totally different due to the use of new introduced data dependent permutations and avoidance of modular arithmetic operations and biased logical operations such as AND and OR. The designers of RAPP claimed that this protocol resists against desynchronization attacks since the last messages of the protocol is sent by the reader and not by the tag. This letter challenges this assumption and shows that RAPP is vulnerable against desynchronization attack. This attack has a reasonable probability of success and is effective whether Hamming weight-based or modular-based rotations are used by the protocol."
]
} |
1907.11322 | 2966078947 | By expanding the connection of objects to the Internet and their entry to human life, the issue of security and privacy has become important. In order to enhance security and privacy on the Internet, many security protocols have been developed. Unfortunately, the security analyses that have been carried out on these protocols show that they are vulnerable to one or a few attacks, which eliminates the use of these protocols. Therefore, the need for a security protocol on the Internet of Things (IoT) has not yet been resolved. Recently, Khor and Sidorov cryptanalyzed the protocol and presented an improved version of it. In this paper, at first, we show that this protocol also does not have sufficient security and so it is not recommended to be used in any application. More precisely, we present a full secret disclosure attack against this protocol, which extracts the whole secrets of the protocol by two communications with the target tag. In addition, an ultralightweight mutual authentication RFID protocol for blockchain enabled supply chains, supported by formal and informal security proofs, was recently proposed. However, we present a full secret disclosure attack against this protocol as well. | As an emerging technology, blockchain is believed to provide higher data protection, reliability, transparency, and lower management costs compared to a conventional centralized database. Hence, it could be a promising solution for large-scale IoT systems. Targeting those benefits, Sidorov recently proposed an ultralightweight mutual authentication RFID protocol for blockchain-enabled supply chains @cite_24 . Although they have claimed security against various attacks, we present an efficient secret disclosure attack on it. For the sake of simplicity, we call this protocol SOVNOKP. | {
"cite_N": [
"@cite_24"
],
"mid": [
"2907112888"
],
"abstract": [
"Previous research studies mostly focused on enhancing the security of radio frequency identification (RFID) protocols for various RFID applications that rely on a centralized database. However, blockchain technology is quickly emerging as a novel distributed and decentralized alternative that provides higher data protection, reliability, immutability, transparency, and lower management costs compared with a conventional centralized database. These properties make it extremely suitable for integration in a supply chain management system. In order to successfully fuse RFID and blockchain technologies together, a secure method of communication is required between the RFID tagged goods and the blockchain nodes. Therefore, this paper proposes a robust ultra-lightweight mutual authentication RFID protocol that works together with a decentralized database to create a secure blockchain-enabled supply chain management system. Detailed security analysis is performed to prove that the proposed protocol is secure from key disclosure, replay, man-in-the-middle, de-synchronization, and tracking attacks. In addition to that, a formal analysis is conducted using Gong, Needham, and Yahalom logic and automated validation of internet security protocols and applications tool to verify the security of the proposed protocol. The protocol is proven to be efficient with respect to storage, computational, and communication costs. In addition to that, a further step is taken to ensure the robustness of the protocol by analyzing the probability of data collision written to the blockchain."
]
} |
1907.11481 | 2966777976 | Good code quality is a prerequisite for efficiently developing maintainable software. In this paper, we present a novel approach to generate exploranative (explanatory and exploratory) data-driven documents that report code quality in an interactive, exploratory environment. We employ a template-based natural language generation method to create textual explanations about the code quality, dependent on data from software metrics. The interactive document is enriched by different kinds of visualization, including parallel coordinates plots and scatterplots for data exploration and graphics embedded into text. We devise an interaction model that allows users to explore code quality with consistent linking between text and visualizations; through integrated explanatory text, users are taught background knowledge about code quality aspects. Our approach to interactive documents was developed in a design study process that included software engineering and visual analytics experts. Although the solution is specific to the software engineering scenario, we discuss how the concept could generalize to multivariate data and report lessons learned in a broader scope. | -- Code quality is multi-faceted and covers, for instance, testability, maintainability, and readability. To examine these aspects, certain quality attributes (e.g., coupling, complexity, size) are quantified by underlying software metrics; for instance, McCabe's software complexity metrics measure readability aspects of the code @cite_13 . For object-oriented systems, a popular set of metrics is the CK suite introduced by Chidamber and Kemerer @cite_43 and the QMOOD metrics (Quality Model for Object-Oriented Design) @cite_15 . Many approaches employ such metrics suites to distinguish parts of the source code in terms of good, acceptable, or bad quality @cite_64 @cite_44 or to identify code smells (problematic properties and anti-patterns of the code) @cite_51 . We also build on object-oriented metrics and use threshold-based approaches to analyze code quality and smells (see ). | {
"cite_N": [
"@cite_64",
"@cite_44",
"@cite_43",
"@cite_51",
"@cite_15",
"@cite_13"
],
"mid": [
"2121866145",
"2127623179",
"2107643286",
"2160538621",
"2561923752",
"2125121913"
],
"abstract": [
"This paper presents the results of a study in which we empirically investigated the suite of object-oriented (OO) design metrics introduced in (Chidamber and Kemerer, 1994). More specifically, our goal is to assess these metrics as predictors of fault-prone classes and, therefore, determine whether they can be used as early quality indicators. This study is complementary to the work described in (Li and Henry, 1993) where the same suite of metrics had been used to assess frequencies of maintenance changes to classes. To perform our validation accurately, we collected data on the development of eight medium-sized information management systems based on identical requirements. All eight projects were developed using a sequential life cycle model, a well-known OO analysis design method and the C++ programming language. Based on empirical and quantitative analysis, the advantages and drawbacks of these OO metrics are discussed. Several of Chidamber and Kemerer's OO metrics appear to be useful to predict class fault-proneness during the early phases of the life-cycle. Also, on our data set, they are better predictors than \"traditional\" code metrics, which can only be collected at a later phase of the software development processes.",
"There are lots of different software metrics discovered and used for defect prediction in the literature. Instead of dealing with so many metrics, it would be practical and easy if we could determine the set of metrics that are most important and focus on them more to predict defectiveness. We use Bayesian networks to determine the probabilistic influential relationships among software metrics and defect proneness. In addition to the metrics used in Promise data repository, we define two more metrics, i.e. NOD for the number of developers and LOCQ for the source code quality. We extract these metrics by inspecting the source code repositories of the selected Promise data repository data sets. At the end of our modeling, we learn the marginal defect proneness probability of the whole software system, the set of most effective metrics, and the influential relationships among metrics and defectiveness. Our experiments on nine open source Promise data repository data sets show that response for class (RFC), lines of code (LOC), and lack of coding quality (LOCQ) are the most effective metrics whereas coupling between objects (CBO), weighted method per class (WMC), and lack of cohesion of methods (LCOM) are less effective metrics on defect proneness. Furthermore, number of children (NOC) and depth of inheritance tree (DIT) have very limited effect and are untrustworthy. On the other hand, based on the experiments on Poi, Tomcat, and Xalan data sets, we observe that there is a positive correlation between the number of developers (NOD) and the level of defectiveness. However, further investigation involving a greater number of projects is needed to confirm our findings.",
"To produce high quality object-oriented (OO) applications, a strong emphasis on design aspects, especially during the early phases of software development, is necessary. Design metrics play an important role in helping developers understand design aspects of software and, hence, improve software quality and developer productivity. In this paper, we provide empirical evidence supporting the role of OO design complexity metrics, specifically a subset of the Chidamber and Kemerer (1991, 1994) suite (CK metrics), in determining software defects. Our results, based on industry data from software developed in two popular programming languages used in OO development, indicate that, even after controlling for the size of the software, these metrics are significantly associated with defects. In addition, we find that the effects of these metrics on defects vary across the samples from two programming languages-C++ and Java. We believe that these results have significant implications for designing high-quality software products using the OO approach.",
"Much effort has been devoted to the development and empirical validation of object-oriented metrics. The empirical validations performed thus far would suggest that a core set of validated metrics is close to being identified. However, none of these studies allow for the potentially confounding effect of class size. We demonstrate a strong size confounding effect and question the results of previous object-oriented metrics validation studies. We first investigated whether there is a confounding effect of class size in validation studies of object-oriented metrics and show that, based on previous work, there is reason to believe that such an effect exists. We then describe a detailed empirical methodology for identifying those effects. Finally, we perform a study on a large C++ telecommunications framework to examine if size is really a confounder. This study considered the Chidamber and Kemerer metrics and a subset of the Lorenz and Kidd metrics. The dependent variable was the incidence of a fault attributable to a field failure (fault-proneness of a class). Our findings indicate that, before controlling for size, the results are very similar to previous studies. The metrics that are expected to be validated are indeed associated with fault-proneness.",
"Code metric analysis is a well-known approach for assessing the quality of a software system. However, current tools and techniques do not take the system architecture (e.g., MVC, Android) into account. This means that all classes are assessed similarly, regardless of their specific responsibilities. In this paper, we propose SATT (Software Architecture Tailored Thresholds), an approach that detects whether an architectural role is considerably different from others in the system in terms of code metrics, and provides a specific threshold for that role. We evaluated our approach on 2 different architectures (MVC and Android) in more than 400 projects. We also interviewed 6 experts in order to explain why some architectural roles are different from others. Our results shows that SATT can overcome issues that traditional approaches have, especially when some architectural role presents very different metric values than others.",
"Recent empirical studies have investigated the use of source code metrics to predict the change- and defect-proneness of source code files and classes. While results showed strong correlations and good predictive power of these metrics, they do not distinguish between interface, abstract or concrete classes. In particular, interfaces declare contracts that are meant to remain stable during the evolution of a software system while the implementation in concrete classes is more likely to change. This paper aims at investigating to which extent the existing source code metrics can be used for predicting change-prone Java interfaces. We empirically investigate the correlation between metrics and the number of fine-grained source code changes in interfaces of ten Java open-source systems. Then, we evaluate the metrics to calculate models for predicting change-prone Java interfaces. Our results show that the external interface cohesion metric exhibits the strongest correlation with the number of source code changes. This metric also improves the performance of prediction models to classify Java interfaces into change-prone and not change-prone."
]
} |
1907.11481 | 2966777976 | Good code quality is a prerequisite for efficiently developing maintainable software. In this paper, we present a novel approach to generate exploranative (explanatory and exploratory) data-driven documents that report code quality in an interactive, exploratory environment. We employ a template-based natural language generation method to create textual explanations about the code quality, dependent on data from software metrics. The interactive document is enriched by different kinds of visualization, including parallel coordinates plots and scatterplots for data exploration and graphics embedded into text. We devise an interaction model that allows users to explore code quality with consistent linking between text and visualizations; through integrated explanatory text, users are taught background knowledge about code quality aspects. Our approach to interactive documents was developed in a design study process that included software engineering and visual analytics experts. Although the solution is specific to the software engineering scenario, we discuss how the concept could generalize to multivariate data and report lessons learned in a broader scope. | -- Visualizations included into the lines or paragraphs of a text are known as sparklines @cite_56 , word-sized graphics @cite_36 , or word-scale graphics @cite_12 . They allow close and coherent integration of the textual and visual representations of data. Some approaches apply these in the context of software engineering and embed them into the code to assist developers in understanding a program. @cite_7 and Sulír et al. @cite_45 suggest augmenting the source code with visualizations to keep track of the state and properties of the code. @cite_46 @cite_21 implement embedded visualizations for understanding program behavior and performance bottlenecks. Similarly, @cite_41 and @cite_33 augment source code with visualizations to aid understanding of runtime behavior. We embed visualizations into natural language text (not into source code) to support better understanding of the quality of the source code. | {
"cite_N": [
"@cite_33",
"@cite_7",
"@cite_36",
"@cite_41",
"@cite_21",
"@cite_56",
"@cite_45",
"@cite_46",
"@cite_12"
],
"mid": [
"2796075754",
"1995073788",
"2898193680",
"2108958711",
"2590534375",
"1898893341",
"2105057414",
"2012118336",
"2133333349"
],
"abstract": [
"Programmers must draw explicit connections between their code and runtime state to properly assess the correctness of their programs. However, debugging tools often decouple the program state from the source code and require explicitly invoked views to bridge the rift between program editing and program understanding. To unobtrusively reveal runtime behavior during both normal execution and debugging, we contribute techniques for visualizing program variables directly within the source code. We describe a design space and placement criteria for embedded visualizations. We evaluate our in situ visualizations in an editor for the Vega visualization grammar. Compared to a baseline development environment, novice Vega users improve their overall task grade by about 2 points when using the in situ visualizations and exhibit significant positive effects on their self-reported speed and accuracy.",
"We present an exploration and a design space that characterize the usage and placement of word-scale visualizations within text documents. Word-scale visualizations are a more general version of sparklines-small, word-sized data graphics that allow meta-information to be visually presented in-line with document text. In accordance with Edward Tufte's definition, sparklines are traditionally placed directly before or after words in the text. We describe alternative placements that permit a wider range of word-scale graphics and more flexible integration with text layouts. These alternative placements include positioning visualizations between lines, within additional vertical and horizontal space in the document, and as interactive overlays on top of the text. Each strategy changes the dimensions of the space available to display the visualizations, as well as the degree to which the text must be adjusted or reflowed to accommodate them. We provide an illustrated design space of placement options for word-scale visualizations and identify six important variables that control the placement of the graphics and the level of disruption of the source text. We also contribute a quantitative analysis that highlights the effect of different placements on readability and text disruption. Finally, we use this analysis to propose guidelines to support the design and placement of word-scale visualizations.",
"Abstract Source code written in textual programming languages is typically edited in integrated development environments (IDEs) or specialized code editors. These tools often display various visual items, such as icons, color highlights or more advanced graphical overlays directly in the main editable source code view. We call such visualizations source code editor augmentation. In this paper, we present a first systematic mapping study of source code editor augmentation tools and approaches. We manually reviewed the metadata of 5553 articles published during the last twenty years in two phases – keyword search and references search. The result is a list of 103 relevant articles and a taxonomy of source code editor augmentation tools with seven dimensions, which we used to categorize the resulting list of the surveyed articles. We also provide the definition of the term source code editor augmentation, along with a brief overview of historical development and augmentations available in current industrial IDEs.",
"Finding and fixing performance bottlenecks requires sound knowledge of the program that is to be optimized. In this paper, we propose an approach for presenting performance-related information to software engineers by visually augmenting source code shown in an editor. Small diagrams at each method declaration and method call visualize the propagation of runtime consumption through the program as well as the interplay of threads in parallelized programs. Advantages of in situ visualization like this over traditional representations, where code and profiling information are shown in different places, promise to be the prevention of a split-attention effect caused by multiple views; information is presented where required, which supports understanding and navigation. We implemented the approach as an IDE plug-in and tested it in a user study with four developers improving the performance of their own programs. The user study provides insights into the process of understanding performance bottlenecks with our approach.",
"Generating visualizations at the size of a word creates dense information representations often called sparklines . The integration of word-sized graphics into text could avoid additional cognitive load caused by splitting the readers’ attention between figures and text. In scientific publications, these graphics make statements easier to understand and verify because additional quantitative information is available where needed. In this work, we perform a literature review to find out how researchers have already applied such word-sized representations. Illustrating the versatility of the approach, we leverage these representations for reporting empirical and bibliographic data in three application examples. For interactive Web-based publications, we explore levels of interactivity and discuss interaction patterns to link visualization and text. We finally call the visualization community to be a pioneer in exploring new visualization-enriched and interactive publication formats.",
"While information visualization frameworks and heuristics have traditionally been reluctant to include acquired codes of meaning, designers are making use of them in a wide variety of ways. Acquired codes leverage a user's experience to understand the meaning of a visualization. They range from figurative visualizations which rely on the reader's recognition of shapes, to conventional arrangements of graphic elements which represent particular subjects. In this study, we used content analysis to codify acquired meaning in visualization. We applied the content analysis to a set of infographics and data visualizations which are exemplars of innovative and effective design. 88 of the infographics and 71 of data visualizations in the sample contain at least one use of figurative visualization. Conventions on the arrangement of graphics are also widespread in the sample. In particular, a comparison of representations of time and other quantitative data showed that conventions can be specific to a subject. These results suggest that there is a need for information visualization research to expand its scope beyond perceptual channels, to include social and culturally constructed meaning. Our paper demonstrates a viable method for identifying figurative techniques and graphic conventions and integrating them into heuristics for visualization design.",
"Visualization is commonly used in data analysis to help the user in getting an initial idea about the raw data as well as visual representation of the regularities obtained in the analysis. In similar way, when we talk about automated text processing and the data consists of text documents, visualization of text document corpus can be very useful. From the automated text processing point of view, natural language is very redundant in the sense that many different words share a common or similar meaning. For computer this can be hard to understand without some background knowledge. We describe an approach to visualization of text document collection based on methods from linear algebra. We apply Latent Semantic Indexing (LSI) as a technique that helps in extracting some of the background knowledge from corpus of text documents. This can be also viewed as extraction of hidden semantic concepts from text documents. In this way visualization can be very helpful in data analysis, for instance, for finding main topics that appear in larger sets of documents. Extraction of main concepts from documents using techniques such as LSI, can make the results of visualizations more useful. For example, given a set of descriptions of European Research projects (6FP) one can find main areas that these projects cover including semantic web, e-learning, security, etc. In this paper we describe a method for visualization of document corpus based on LSI, the system implementing it and give results of using the system on several datasets.",
"Interactive visualization provides valuable support for exploring, analyzing, and understanding textual documents. Certain tasks, however, require that insights derived from visual abstractions are verified by a human expert perusing the source text. So far, this problem is typically solved by offering overview-detail techniques, which present different views with different levels of abstractions. This often leads to problems with visual continuity. Focus-context techniques, on the other hand, succeed in accentuating interesting subsections of large text documents but are normally not suited for integrating visual abstractions. With VarifocalReader we present a technique that helps to solve some of these approaches' problems by combining characteristics from both. In particular, our method simplifies working with large and potentially complex text documents by simultaneously offering abstract representations of varying detail, based on the inherent structure of the document, and access to the text itself. In addition, VarifocalReader supports intra-document exploration through advanced navigation concepts and facilitates visual analysis tasks. The approach enables users to apply machine learning techniques and search mechanisms as well as to assess and adapt these techniques. This helps to extract entities, concepts and other artifacts from texts. In combination with the automatic generation of intermediate text levels through topic segmentation for thematic orientation, users can test hypotheses or develop interesting new research questions. To illustrate the advantages of our approach, we provide usage examples from literature studies.",
"During maintenance developers cannot read the entire code of large systems. They need a way to get a quick understanding of source code entities (such as, classes, methods, packages, etc.), so they can efficiently identify and then focus on the ones related to their task at hand. Sometimes reading just a method header or a class name does not tell enough about its purpose and meaning, while reading the entire implementation takes too long. We study a solution which mitigates the two approaches, i.e., short and accurate textual descriptions that illustrate the software entities without having to read the details of the implementation. We create such descriptions using techniques from automatic text summarization. The paper presents a study that investigates the suitability of various such techniques for generating source code summaries. The results indicate that a combination of text summarization techniques is most appropriate for source code summarization and that developers generally agree with the summaries produced."
]
} |