aid | mid | abstract | related_work | ref_abstract |
---|---|---|---|---|
1901.04982 | 2909529610 | Modern Intel CPUs reduce their frequency when executing wide vector operations (AVX2 and AVX-512 instructions), as these instructions increase power consumption. The frequency is only increased again two milliseconds after the last code section containing such instructions has been executed in order to prevent excessive numbers of frequency changes. Due to this delay, intermittent use of wide vector operations can slow down the rest of the system significantly. For example, previous work has shown the performance of web servers to be reduced by up to 10% if the SSL library uses AVX-512 vector instructions. These performance variations are hard to predict during software development as the performance impact of vectorization depends on the specific workload. We describe a mechanism to reduce the slowdown caused by wide vector instructions without requiring extensive changes to existing software. Our design allows the developer to mark problematic AVX code regions. The scheduler then restricts execution of this code to a subset of the cores so that only these cores' frequency is affected. Threads are automatically migrated to a suitable core whenever necessary. We identify a suitable load balancing policy to ensure good utilization of all available cores. Our approach is able to reduce the performance variability caused by AVX2 and AVX-512 instructions by over 70%. | @cite_1 describe several scheduling algorithms for ISA-heterogeneous multiprocessors which share a common core ISA. They describe a system in which each task has a task ISA ID and where the scheduler prefers tasks whose task ISA ID closely matches the core's ISA. We apply a similar approach to software-defined heterogeneity on current Intel server processors. As our design is integrated into an existing scheduler with deadline-based priorities, we employ a slightly different prioritization mechanism to prevent starvation of kernel tasks. | {
"cite_N": [
"@cite_1"
],
"mid": [
"2127226195"
],
"abstract": [
"Heterogeneous MPSoC architectures can provide higher performance and flexibility with less power consumption and lower cost than homogeneous ones. However, as processor instruction sets of general heterogeneous MPSoCs are not identical, tasks migration between two heterogeneous processors is not possible. To enable this function, we propose to build one specific heterogeneous MPSoC platform in which all heterogeneous processors are based on the same core instruction set for the operating system realization. Different extended instructions can be added for different processors to improve the system performance. Tasks can be migrated from one processor to another only if the target processor has all instructions which can meet the execution requirement of this task. This paper concentrates on the infrastructure that is necessary to support the scheduling and migration of tasks between the processors. By using the Motion-JPEG case study, we confirm that our task migration framework can achieve higher processor usage rate and more flexibility."
]
} |
1901.04949 | 2963609467 | The Encoder-Decoder architecture is a mainstream deep learning model for biomedical image segmentation. The encoder fully compresses the input and generates encoded features, and the decoder then produces dense predictions using encoded features. However, decoders are still under-explored in such architectures. In this paper, we comprehensively study the state-of-the-art Encoder-Decoder architectures, and propose a new universal decoder, called cascade decoder, to improve semantic segmentation accuracy. Our cascade decoder can be embedded into existing networks and trained altogether in an end-to-end fashion. The cascade decoder structure aims to conduct more effective decoding of hierarchically encoded features and is more compatible with common encoders than the known decoders. We replace the decoders of state-of-the-art models with our cascade decoder for several challenging biomedical image segmentation tasks, and the considerable improvements achieved demonstrate the efficacy of our new decoding method. | A typical model-wise decoding structure is shown in Fig. (A). Its core idea is to treat the whole encoder as a "black-box" model, assuming that all the learned information is encoded into the output of the last layer of the encoder network. We call it "model-wise" because the decoder takes the "model-wise" output of the encoder model as the single input for decoding. Such a structure has been widely used for segmentation tasks @cite_19 . Model-wise decoders focus more on the semantic context and may have trouble with segmenting fine details. We will show in Section that, for example, such decoders perform poorly in segmenting thin tissues (e.g., see the purple arrows in Fig. (F)). | {
"cite_N": [
"@cite_19"
],
"mid": [
"2518297742"
],
"abstract": [
"Accurate localization and segmentation of intervertebral discs (IVDs) from volumetric data is a pre-requisite for clinical diagnosis and treatment planning. With the advance of deep learning, 2D fully convolutional networks (FCN) have achieved state-of-the-art performance on 2D image segmentation related tasks. However, how to segment objects such as IVDs from volumetric data hasn’t been well addressed so far. In order to resolve above problem, we extend the 2D FCN into a 3D variant with end-to-end learning and inference, where voxel-wise predictions are generated. In order to compare the performance of 2D and 3D deep learning methods on volumetric segmentation, two different frameworks are studied: one is a 2D FCN with deep feature representations by making use of adjacent slices, the other one is a 3D FCN with flexible 3D convolutional kernels. We evaluated our methods on the 3D MRI data of MICCAI 2015 Challenge on Automatic Intervertebral Disc Localization and Segmentation. Extensive experimental results corroborated that 3D FCN can achieve a higher localization and segmentation accuracy than 2D FCN, which demonstrates the significance of volumetric information when confronting 3D localization and segmentation tasks."
]
} |
1901.04949 | 2963609467 | The Encoder-Decoder architecture is a mainstream deep learning model for biomedical image segmentation. The encoder fully compresses the input and generates encoded features, and the decoder then produces dense predictions using encoded features. However, decoders are still under-explored in such architectures. In this paper, we comprehensively study the state-of-the-art Encoder-Decoder architectures, and propose a new universal decoder, called cascade decoder, to improve semantic segmentation accuracy. Our cascade decoder can be embedded into existing networks and trained altogether in an end-to-end fashion. The cascade decoder structure aims to conduct more effective decoding of hierarchically encoded features and is more compatible with common encoders than the known decoders. We replace the decoders of state-of-the-art models with our cascade decoder for several challenging biomedical image segmentation tasks, and the considerable improvements achieved demonstrate the efficacy of our new decoding method. | (1) Linear structure: The idea of this structure is to chain convolutions and pooling operations for feature extraction, such as in @cite_19 . (2) Residual structure: In such a structure, the residual path adds the input features element-wise to the output of the same block, making it a residual unit @cite_4 . This structure has been developed into various architectures @cite_13 @cite_6 . (3) Dense structure: The dense structure uses a densely connected path to concatenate the input features with the output features, allowing each layer to utilize raw information from all the previous layers @cite_17 . This structure has been widely used in FCN models @cite_11 @cite_18 . | {
"cite_N": [
"@cite_18",
"@cite_11",
"@cite_4",
"@cite_6",
"@cite_19",
"@cite_13",
"@cite_17"
],
"mid": [
"",
"2741891296",
"2194775991",
"",
"2518297742",
"2518214538",
"2963446712"
],
"abstract": [
"",
"Automatic and accurate whole-heart and great vessel segmentation from 3D cardiac magnetic resonance (MR) images plays an important role in the computer-assisted diagnosis and treatment of cardiovascular disease. However, this task is very challenging due to ambiguous cardiac borders and large anatomical variations among different subjects. In this paper, we propose a novel densely-connected volumetric convolutional neural network, referred as DenseVoxNet, to automatically segment the cardiac and vascular structures from 3D cardiac MR images. The DenseVoxNet adopts the 3D fully convolutional architecture for effective volume-to-volume prediction. From the learning perspective, our DenseVoxNet has three compelling advantages. First, it preserves the maximum information flow between layers by a densely-connected mechanism and hence eases the network training. Second, it avoids learning redundant feature maps by encouraging feature reuse and hence requires fewer parameters to achieve high performance, which is essential for medical applications with limited training data. Third, we add auxiliary side paths to strengthen the gradient propagation and stabilize the learning process. We demonstrate the effectiveness of DenseVoxNet by comparing it with the state-of-the-art approaches from HVSMR 2016 challenge in conjunction with MICCAI, and our network achieves the best dice coefficient. We also show that our network can achieve better performance than other 3D ConvNets but with fewer parameters.",
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.",
"",
"Accurate localization and segmentation of intervertebral discs (IVDs) from volumetric data is a pre-requisite for clinical diagnosis and treatment planning. With the advance of deep learning, 2D fully convolutional networks (FCN) have achieved state-of-the-art performance on 2D image segmentation related tasks. However, how to segment objects such as IVDs from volumetric data hasn’t been well addressed so far. In order to resolve above problem, we extend the 2D FCN into a 3D variant with end-to-end learning and inference, where voxel-wise predictions are generated. In order to compare the performance of 2D and 3D deep learning methods on volumetric segmentation, two different frameworks are studied: one is a 2D FCN with deep feature representations by making use of adjacent slices, the other one is a 3D FCN with flexible 3D convolutional kernels. We evaluated our methods on the 3D MRI data of MICCAI 2015 Challenge on Automatic Intervertebral Disc Localization and Segmentation. Extensive experimental results corroborated that 3D FCN can achieve a higher localization and segmentation accuracy than 2D FCN, which demonstrates the significance of volumetric information when confronting 3D localization and segmentation tasks.",
"Recently deep residual learning with residual units for training very deep neural networks advanced the state-of-the-art performance on 2D image recognition tasks, e.g., object detection and segmentation. However, how to fully leverage contextual representations for recognition tasks from volumetric data has not been well studied, especially in the field of medical image computing, where a majority of image modalities are in volumetric format. In this paper we explore the deep residual learning on the task of volumetric brain segmentation. There are at least two main contributions in our work. First, we propose a deep voxelwise residual network, referred as VoxResNet, which borrows the spirit of deep residual learning in 2D image recognition tasks, and is extended into a 3D variant for handling volumetric data. Second, an auto-context version of VoxResNet is proposed by seamlessly integrating the low-level image appearance features, implicit shape information and high-level context together for further improving the volumetric segmentation performance. Extensive experiments on the challenging benchmark of brain segmentation from magnetic resonance (MR) images corroborated the efficacy of our proposed method in dealing with volumetric data. We believe this work unravels the potential of 3D deep learning to advance the recognition performance on volumetric image segmentation.",
"Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections—one between each layer and its subsequent layer—our network has L(L+1) 2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less memory and computation to achieve high performance. Code and pre-trained models are available at https: github.com liuzhuang13 DenseNet."
]
} |
1901.04942 | 2908618619 | Code obfuscation is a popular approach to turn program comprehension and analysis harder, with the aim of mitigating threats related to malicious reverse engineering and code tampering. However, programming languages that compile to high level bytecode (e.g., Java) can be obfuscated only to a limited extent. In fact, high level bytecode still contains high level relevant information that an attacker might exploit. In order to enable more resilient obfuscations, part of these programs might be implemented with programming languages (e.g., C) that compile to low level machine-dependent code. In fact, machine code contains and leaks less high level information and it enables more resilient obfuscations. In this paper, we present an approach to automatically translate critical sections of high level Java bytecode to C code, so that more effective obfuscations can be resorted to. Moreover, a developer can still work with a single programming language, i.e., Java. | Identifier renaming @cite_21 is an instance of layout obfuscation that removes relevant information from the code by changing the names of classes, fields and operations into meaningless identifiers, so as to make it harder for an attacker to guess the functionalities implemented by different parts of the application. | {
"cite_N": [
"@cite_21"
],
"mid": [
"2123834080"
],
"abstract": [
"There exist several obfuscation tools for preventing Java bytecode from being decompiled. Most of these tools, simply scramble the names of the identifiers stored in a bytecode by substituting the identifiers with meaningless names. However, the scrambling technique cannot deter a determined cracker very long. We propose several advanced obfuscation techniques that make Java bytecode impossible to recompile or make the decompiled program difficult to understand and to recompile. The crux of our approach is to over use an identifier. That is, an identifier can denote several entities, such as types, fields, and methods, simultaneously. An additional benefit is that the size of the bytecode is reduced because fewer and shorter identifier names are used. Furthermore, we also propose several techniques to intentionally introduce syntactic and semantic errors into the decompiled program while preserving the original behaviors of the bytecode. Thus, the decompiled program would have to be debugged manually. Although our basic approach is to scramble the identifiers in Java bytecode, the scrambled bytecode produced with our techniques is much harder to crack than that produced with other identifier scrambling techniques. Furthermore, the run-time efficiency of the obfuscated bytecode is also improved because the size of the bytecode becomes smaller after obfuscation."
]
} |
1901.04942 | 2908618619 | Code obfuscation is a popular approach to turn program comprehension and analysis harder, with the aim of mitigating threats related to malicious reverse engineering and code tampering. However, programming languages that compile to high level bytecode (e.g., Java) can be obfuscated only to a limited extent. In fact, high level bytecode still contains high level relevant information that an attacker might exploit. In order to enable more resilient obfuscations, part of these programs might be implemented with programming languages (e.g., C) that compile to low level machine-dependent code. In fact, machine code contains and leaks less high level information and it enables more resilient obfuscations. In this paper, we present an approach to automatically translate critical sections of high level Java bytecode to C code, so that more effective obfuscations can be resorted to. Moreover, a developer can still work with a single programming language, i.e., Java. | The data obfuscation category of transforms targets data and data structures contained in the program. Using these transformations, data encoding can be changed @cite_13 , variables can be split or merged, and arrays can be split, folded, and merged. | {
"cite_N": [
"@cite_13"
],
"mid": [
"2602097259"
],
"abstract": [
"Data obfuscations are program transformations used to complicate program understanding and conceal actual values of program variables. The possibility to hide constant values is a basic building block of several obfuscation techniques. For example, in XOR Masking a constant mask is used to encode data, but this mask must be hidden too, in order to keep the obfuscation resilient to attacks. In this paper, we present a novel technique based on the k-clique problem, which is known to be NP-complete, to generate opaque constants, i.e. values that are difficult to guess by static analysis. In our experimental assessment we show that our opaque constants are computationally cheap to generate, both at obfuscation time and at runtime. Moreover, due to the NP-completeness of the k-clique problem, our opaque constants can be proven to be hard to attack with state-of-the-art static analysis tools."
]
} |
1901.04942 | 2908618619 | Code obfuscation is a popular approach to turn program comprehension and analysis harder, with the aim of mitigating threats related to malicious reverse engineering and code tampering. However, programming languages that compile to high level bytecode (e.g., Java) can be obfuscated only to a limited extent. In fact, high level bytecode still contains high level relevant information that an attacker might exploit. In order to enable more resilient obfuscations, part of these programs might be implemented with programming languages (e.g., C) that compile to low level machine-dependent code. In fact, machine code contains and leaks less high level information and it enables more resilient obfuscations. In this paper, we present an approach to automatically translate critical sections of high level Java bytecode to C code, so that more effective obfuscations can be resorted to. Moreover, a developer can still work with a single programming language, i.e., Java. | The objective of control-flow obfuscation is to alter the flow of control within the code. Reordering statements, methods, and loops, and hiding the actual control flow behind irrelevant conditional statements, are classified as control-flow obfuscation transforms. Obfuscation based on opaque predicates @cite_18 is a control-flow obfuscation that tries to hide the original behavior of an application by complicating the control flow with artificial branches. An opaque predicate is a conditional expression whose value is known to the obfuscator, but is hard to deduce statically by an attacker. An opaque predicate can be used in the condition of a newly generated if statement. One branch of the if statement contains the original application code, while the other contains a bogus version of it. Only the former branch will be executed, causing the semantics of the application to remain the same. In order to generate resilient opaque predicates, pointer aliasing can be used, since inter-procedural static alias analysis is known to be intractable. | {
"cite_N": [
"@cite_18"
],
"mid": [
"2126851641"
],
"abstract": [
"It has become common to distribute software in forms that are isomorphic to the original source code. An important example is Java bytecode. Since such codes are easy to decompile, they increase the risk of malicious reverse engineering attacks.In this paper we describe the design of a Java code obfuscator, a tool which - through the application of code transformations - converts a Java program into an equivalent one that is more difficult to reverse engineer.We describe a number of transformations which obfuscate control-flow. Transformations are evaluated with respect to potency (To what degree is a human reader confused?), resilience (How well are automatic deobfuscation attacks resisted?), cost (How much time space overhead is added?), and stealth (How well does obfuscated code blend in with the original code?).The resilience of many control-altering transformations rely on the resilience of opaque predicates. These are boolean valued expressions whose values are known to the obfuscator but difficult to determine for an automatic deobfuscator. We show how to construct resilient, cheap, and stealthy opaque predicates based on the intractability of certain static analysis problems such as alias analysis."
]
} |
1901.04942 | 2908618619 | Code obfuscation is a popular approach to turn program comprehension and analysis harder, with the aim of mitigating threats related to malicious reverse engineering and code tampering. However, programming languages that compile to high level bytecode (e.g., Java) can be obfuscated only to a limited extent. In fact, high level bytecode still contains high level relevant information that an attacker might exploit. In order to enable more resilient obfuscations, part of these programs might be implemented with programming languages (e.g., C) that compile to low level machine-dependent code. In fact, machine code contains and leaks less high level information and it enables more resilient obfuscations. In this paper, we present an approach to automatically translate critical sections of high level Java bytecode to C code, so that more effective obfuscations can be resorted to. Moreover, a developer can still work with a single programming language, i.e., Java. | With the increasing adoption of Java as a programming language, the idea of translating Java bytecode to C was investigated @cite_2 , but mainly as a way to avoid the overhead due to the Java interpreter, i.e. by turning the whole Java program into a single machine-dependent executable. In our approach, we instead keep the original Java program structure, and only selected portions are translated to C to make analysis more difficult and to enable more efficient obfuscations. | {
"cite_N": [
"@cite_2"
],
"mid": [
"2133165526"
],
"abstract": [
"The Java bytecode language is emerging as a software distribution standard. With major vendors committed to porting the Java run-time environment to their platforms, programs in Java bytecode are expected to run without modification on multiple platforms. These first generation run-time environments rely on an interpreter to bridge the gap between the bytecode instructions and the native hardware. This interpreter approach is sufficient for specialized applications such as Internet browsers where application performance is often limited by network delays rather than processor speed. It is, however, not sufficient for executing general applications distributed in Java bytecode. This paper presents our initial prototyping experience with Caffeine, an optimizing translator from Java bytecode to native machine code. We discuss the major technical issues involved in stack to register mapping, run-time memory structure mapping, and exception handlers. Encouraging initial results based on our X86 port are presented."
]
} |
1901.04942 | 2908618619 | Code obfuscation is a popular approach to turn program comprehension and analysis harder, with the aim of mitigating threats related to malicious reverse engineering and code tampering. However, programming languages that compile to high level bytecode (e.g., Java) can be obfuscated only to a limited extent. In fact, high level bytecode still contains high level relevant information that an attacker might exploit. In order to enable more resilient obfuscations, part of these programs might be implemented with programming languages (e.g., C) that compile to low level machine-dependent code. In fact, machine code contains and leaks less high level information and it enables more resilient obfuscations. In this paper, we present an approach to automatically translate critical sections of high level Java bytecode to C code, so that more effective obfuscations can be resorted to. Moreover, a developer can still work with a single programming language, i.e., Java. | Other obfuscation approaches that rely on code translation are based on an obfuscated Virtual Machine @cite_11 @cite_5 (OVM for short). Binary machine-dependent code is translated to custom opcodes that can be interpreted by the OVM. (A portion of) the clear code is replaced by the corresponding custom opcodes and the OVM is appended to the program. When the obfuscated program is launched, the OVM takes control: it reads, decodes and executes the custom opcodes. Such an OVM can make it much harder to reverse-engineer programs because standard disassemblers and standard tracing tools (e.g., debuggers) do not target the custom opcodes, and because the attackers are not familiar with them. | {
"cite_N": [
"@cite_5",
"@cite_11"
],
"mid": [
"2112243402",
"2064435654"
],
"abstract": [
"One of the most common forms of security attacks involves exploiting a vulnerability to inject malicious code into an executing application and then cause the injected code to be executed. A theoretically strong approach to defending against any type of code-injection attack is to create and use a process-specific instruction set that is created by a randomization algorithm. Code injected by an attacker who does not know the randomization key will be invalid for the randomized processor effectively thwarting the attack. This paper describes a secure and efficient implementation of instruction-set randomization (ISR) using software dynamic translation. The paper makes three contributions beyond previous work on ISR. First, we describe an implementation that uses a strong cipher algorithm--the Advanced Encryption Standard (AES), to perform randomization. AES is generally believed to be impervious to known attack methodologies. Second, we demonstrate that ISR using AES can be implemented practically and efficiently (considering both execution time and code size overheads) without requiring special hardware support. The third contribution is that our approach detects malicious code before it is executed. Previous approaches relied on probabilistic arguments that execution of non-randomized foreign code would eventually cause a fault or runtime exception.",
"Despite huge efforts by software providers, software protection mechanisms are still broken on a regular basis. Due to the current distribution model, an attack against one copy of the software can be reused against any copy of the software. Diversity is an important tool to overcome this problem. It allows for renewable defenses in space, by giving every user a different copy, and renewable defenses in time when combined with tailored updates. This paper studies the possibilities and limitations of using virtualization to open a new set of opportunities to make diverse copies of a piece of software and to make individual copies more tamper-resistant. The performance impact is considerable and indicates that these techniques are best avoided in performance-critical parts of the code."
]
} |
1901.04942 | 2908618619 | Code obfuscation is a popular approach to turn program comprehension and analysis harder, with the aim of mitigating threats related to malicious reverse engineering and code tampering. However, programming languages that compile to high level bytecode (e.g., Java) can be obfuscated only to a limited extent. In fact, high level bytecode still contains high level relevant information that an attacker might exploit. In order to enable more resilient obfuscations, part of these programs might be implemented with programming languages (e.g., C) that compile to low level machine-dependent code. In fact, machine code contains and leaks less high level information and it enables more resilient obfuscations. In this paper, we present an approach to automatically translate critical sections of high level Java bytecode to C code, so that more effective obfuscations can be resorted to. Moreover, a developer can still work with a single programming language, i.e., Java. | This attack is possible because the application itself is not dependent on the specific OVM. As a countermeasure, techniques have been proposed to inject such bindings @cite_16 . | {
"cite_N": [
"@cite_16"
],
"mid": [
"2124307915"
],
"abstract": [
"Process-level Virtual machines (PVMs) often play a crucial role in program protection. In particular, virtualization-based tools like VMProtect and CodeVirtualizer have been shown to provide desirable obfuscation properties (i.e., resistance to disassembly and code analysis). To be efficient, many tools cache frequently-executed code in a code cache. This code is run directly on hardware and consequently may be susceptible to unintended, malicious modification after it has been generated. To thwart such modifications, this work presents a novel methodology that imparts tamper detection at run time to PVM-protected applications. Our scheme centers around the run-time creation of a network of software knots (an instruction sequence that checksums portions of the code) to detect tamper. These knots are used to check the integrity of cached code, although our techniques could be applied to check any software-protection properties. Used in conjunction with established static techniques, our solution provides a mechanism for protecting PVM-generated code from modification. We have implemented a PVM system that automatically inserts code into an application to dynamically generate polymorphic software knots. Our experiments show that PVMs do indeed provide a suitable platform for extending guard protection, without the addition of high overheads to run-time performance and memory. Our evaluations demonstrate that these knots add less than 10 overhead while providing frequent integrity checks."
]
} |
1901.05219 | 2910436055 | Sentence embedding is a significant research topic in the field of natural language processing (NLP). Generating sentence embedding vectors reflecting the intrinsic meaning of a sentence is a key factor to achieve an enhanced performance in various NLP tasks such as sentence classification and document summarization. Therefore, various sentence embedding models based on supervised and unsupervised learning have been proposed after the advent of researches regarding the distributed representation of words. They were evaluated through semantic textual similarity (STS) tasks, which measure the degree of semantic preservation of a sentence and neural network-based supervised embedding models generally yielded state-of-the-art performance. However, these models have a limitation in that they have multiple parameters to update, thereby requiring a tremendous amount of labeled training data. In this study, we propose an efficient approach that learns a transition matrix that refines a sentence embedding vector to reflect the latent semantic meaning of a sentence. The proposed method has two practical advantages; (1) it can be applied to any sentence embedding method, and (2) it can achieve robust performance in STS tasks irrespective of the number of training examples. | Doc2vec @cite_3 is a representative model that does not need sentence sequence information. Paragraph vector-distributed bag of words (PV-DBOW) and paragraph vector-distributed memory (PV-DM), two distinct learning methods of Doc2vec, train sentence vectors based on the same objective: maximizing the probability of predicting the words constituting the sentence. The probability is defined as the dot product between a sentence vector and a word vector. PV-DM considers sequential information of words by employing a moving window. In this method, a sentence vector is learned to predict a word appearing after the moving window using the words within the window and the sentence vector. However, in the PV-DBOW method, words included in the window are arbitrarily selected. Therefore, it is incapable of reflecting sequential information of words in a sentence. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2131744502"
],
"abstract": [
"Many machine learning algorithms require the input to be represented as a fixed-length feature vector. When it comes to texts, one of the most common fixed-length features is bag-of-words. Despite their popularity, bag-of-words features have two major weaknesses: they lose the ordering of the words and they also ignore semantics of the words. For example, \"powerful,\" \"strong\" and \"Paris\" are equally distant. In this paper, we propose Paragraph Vector, an unsupervised algorithm that learns fixed-length feature representations from variable-length pieces of texts, such as sentences, paragraphs, and documents. Our algorithm represents each document by a dense vector which is trained to predict words in the document. Its construction gives our algorithm the potential to overcome the weaknesses of bag-of-words models. Empirical results show that Paragraph Vectors outperforms bag-of-words models as well as other techniques for text representations. Finally, we achieve new state-of-the-art results on several text classification and sentiment analysis tasks."
]
} |
1901.05219 | 2910436055 | Sentence embedding is a significant research topic in the field of natural language processing (NLP). Generating sentence embedding vectors reflecting the intrinsic meaning of a sentence is a key factor to achieve an enhanced performance in various NLP tasks such as sentence classification and document summarization. Therefore, various sentence embedding models based on supervised and unsupervised learning have been proposed after the advent of researches regarding the distributed representation of words. They were evaluated through semantic textual similarity (STS) tasks, which measure the degree of semantic preservation of a sentence and neural network-based supervised embedding models generally yielded state-of-the-art performance. However, these models have a limitation in that they have multiple parameters to update, thereby requiring a tremendous amount of labeled training data. In this study, we propose an efficient approach that learns a transition matrix that refines a sentence embedding vector to reflect the latent semantic meaning of a sentence. The proposed method has two practical advantages; (1) it can be applied to any sentence embedding method, and (2) it can achieve robust performance in STS tasks irrespective of the number of training examples. | proposed a simple embedding model named SIF, which computes a sentence vector as a weighted average of fixed pre-trained word embedding vectors. Despite its simplicity, SIF accomplished improved performance in STS tasks and outperformed many complex models based on RNNs if word weights are properly adjusted. Sent2vec @cite_14 has a similar characteristic with SIF, which computes a sentence vector as a weighted average of word embedding vectors. However, Sent2vec trains not only the embedding vector of words that are unigram, but also that of n-gram tokens. It is different from SIF in that it employs n-gram embedding vectors to generate sentence vectors. | {
"cite_N": [
"@cite_14"
],
"mid": [
"2605035112"
],
"abstract": [
"The recent tremendous success of unsupervised word embeddings in a multitude of applications raises the obvious question if similar methods could be derived to improve embeddings (i.e. semantic representations) of word sequences as well. We present a simple but efficient unsupervised objective to train distributed representations of sentences. Our method outperforms the state-of-the-art unsupervised models on most benchmark tasks, highlighting the robustness of the produced general-purpose sentence embeddings."
]
} |
1901.05219 | 2910436055 | Sentence embedding is a significant research topic in the field of natural language processing (NLP). Generating sentence embedding vectors reflecting the intrinsic meaning of a sentence is a key factor to achieve an enhanced performance in various NLP tasks such as sentence classification and document summarization. Therefore, various sentence embedding models based on supervised and unsupervised learning have been proposed after the advent of researches regarding the distributed representation of words. They were evaluated through semantic textual similarity (STS) tasks, which measure the degree of semantic preservation of a sentence and neural network-based supervised embedding models generally yielded state-of-the-art performance. However, these models have a limitation in that they have multiple parameters to update, thereby requiring a tremendous amount of labeled training data. In this study, we propose an efficient approach that learns a transition matrix that refines a sentence embedding vector to reflect the latent semantic meaning of a sentence. The proposed method has two practical advantages; (1) it can be applied to any sentence embedding method, and (2) it can achieve robust performance in STS tasks irrespective of the number of training examples. | Contrary to previously demonstrated sentence embedding models that exploit information from corpora to learn sentence vectors, C-PHRASE @cite_29 requires external information. C-PHRASE uses information from the syntactic parse tree of each sentence. This additional knowledge is included in the training objective of C-BOW @cite_31 . | {
"cite_N": [
"@cite_29",
"@cite_31"
],
"mid": [
"2251047310",
"2153579005"
],
"abstract": [
"We introduce C-PHRASE, a distributional semantic model that learns word representations by optimizing context prediction for phrases at all levels in a syntactic tree, from single words to full sentences. C-PHRASE outperforms the state-of-theart C-BOW model on a variety of lexical tasks. Moreover, since C-PHRASE word vectors are induced through a compositional learning objective (modeling the contexts of words combined into phrases), when they are summed, they produce sentence representations that rival those generated by ad-hoc compositional models.",
"The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible."
]
} |
1901.05219 | 2910436055 | Sentence embedding is a significant research topic in the field of natural language processing (NLP). Generating sentence embedding vectors reflecting the intrinsic meaning of a sentence is a key factor to achieve an enhanced performance in various NLP tasks such as sentence classification and document summarization. Therefore, various sentence embedding models based on supervised and unsupervised learning have been proposed after the advent of researches regarding the distributed representation of words. They were evaluated through semantic textual similarity (STS) tasks, which measure the degree of semantic preservation of a sentence and neural network-based supervised embedding models generally yielded state-of-the-art performance. However, these models have a limitation in that they have multiple parameters to update, thereby requiring a tremendous amount of labeled training data. In this study, we propose an efficient approach that learns a transition matrix that refines a sentence embedding vector to reflect the latent semantic meaning of a sentence. The proposed method has two practical advantages; (1) it can be applied to any sentence embedding method, and (2) it can achieve robust performance in STS tasks irrespective of the number of training examples. | FastSent @cite_8 , similar to SkipThought, is a sentence embedding model aimed at predicting the surrounding sentences of a given sentence. FastSent learns the source word embedding @math and target word embedding @math . When three consecutive sentences @math , @math , @math are given, @math , the representation vector of @math , is calculated as the sum of the source word embedding vectors, and the cost function is simply defined over this representation and the adjacent sentences. In addition, the authors also proposed a variant model (FastSent+AE) that predicts not only the adjacent sentences but also the center sentence @math . FastSent, with such a simple structure, takes much less training time than SkipThought. | {
"cite_N": [
"@cite_8"
],
"mid": [
"2271328876"
],
"abstract": [
"Unsupervised methods for learning distributed representations of words are ubiquitous in today's NLP research, but far less is known about the best ways to learn distributed phrase or sentence representations from unlabelled data. This paper is a systematic comparison of models that learn such representations. We find that the optimal approach depends critically on the intended application. Deeper, more complex models are preferable for representations to be used in supervised systems, but shallow log-linear models work best for building representation spaces that can be decoded with simple spatial distance metrics. We also propose two new unsupervised representation-learning objectives designed to optimise the trade-off between training time, domain portability and performance."
]
} |
1901.05219 | 2910436055 | Sentence embedding is a significant research topic in the field of natural language processing (NLP). Generating sentence embedding vectors reflecting the intrinsic meaning of a sentence is a key factor to achieve an enhanced performance in various NLP tasks such as sentence classification and document summarization. Therefore, various sentence embedding models based on supervised and unsupervised learning have been proposed after the advent of researches regarding the distributed representation of words. They were evaluated through semantic textual similarity (STS) tasks, which measure the degree of semantic preservation of a sentence and neural network-based supervised embedding models generally yielded state-of-the-art performance. However, these models have a limitation in that they have multiple parameters to update, thereby requiring a tremendous amount of labeled training data. In this study, we propose an efficient approach that learns a transition matrix that refines a sentence embedding vector to reflect the latent semantic meaning of a sentence. The proposed method has two practical advantages; (1) it can be applied to any sentence embedding method, and (2) it can achieve robust performance in STS tasks irrespective of the number of training examples. | InferSent @cite_0 is a sentence embedding model trained through supervised tasks. Inspired by previous research in computer vision, where a large number of models are pre-trained through a classification task based on the ImageNet @cite_18 dataset, the authors of InferSent investigated the effectiveness of supervised tasks in the learning of a sentence embedding model in the field of NLP. Through experiments, they concluded that a sentence embedding model having a bidirectional LSTM structure, trained on the SNLI dataset, yielded state-of-the-art performance in various NLP tasks. | {
"cite_N": [
"@cite_0",
"@cite_18"
],
"mid": [
"2612953412",
"2108598243"
],
"abstract": [
"Many modern NLP systems rely on word embeddings, previously trained in an unsupervised manner on large corpora, as base features. Efforts to obtain embeddings for larger chunks of text, such as sentences, have however not been so successful. Several attempts at learning unsupervised representations of sentences have not reached satisfactory enough performance to be widely adopted. In this paper, we show how universal sentence representations trained using the supervised data of the Stanford Natural Language Inference datasets can consistently outperform unsupervised methods like SkipThought vectors on a wide range of transfer tasks. Much like how computer vision uses ImageNet to obtain features, which can then be transferred to other tasks, our work tends to indicate the suitability of natural language inference for transfer learning to other NLP tasks. Our encoder is publicly available.",
"The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond."
]
} |
1901.05061 | 2909169132 | In this paper we study deep learning-based music source separation, and explore using an alternative loss to the standard spectrogram pixel-level L2 loss for model training. Our main contribution is in demonstrating that adding a high-level feature loss term, extracted from the spectrograms using a VGG net, can improve separation quality vis-a-vis a pure pixel-level loss. We show this improvement in the context of the MMDenseNet, a State-of-the-Art deep learning model for this task, for the extraction of drums and vocal sounds from songs in the musdb18 database, covering a broad range of western music genres. We believe that this finding can be generalized and applied to broader machine learning-based systems in the audio domain. | To the best of our knowledge, there is no existing work on the application of such spectrogram feature losses to music source separation. The general idea of applying feature style reconstruction losses, as proposed in @cite_4 for the visual domain, to an audio domain problem has been explored by some researchers, with mixed results. In @cite_3 , the authors propose an audio style transfer using, as one of the approaches, style reconstruction losses extracted using the VGG network, similar to @cite_4 . In @cite_1 , the authors explore audio generation as an audio style transfer problem, using similar loss terms. More generally, the idea of perceptual losses for audio is still an open area of research, where the task is to find loss measures that correlate better with subjective measures of audio quality. However, while the feature losses we explore in our work are derived from perceptual losses in the image domain, they are more directly a descriptor of visual patterns in audio spectrograms than being a perceptual descriptor of the underlying audio. | {
"cite_N": [
"@cite_1",
"@cite_4",
"@cite_3"
],
"mid": [
"2766465839",
"2331128040",
""
],
"abstract": [
"“Style transfer” among images has recently emerged as a very active research topic, fuelled by the power of convolution neural networks (CNNs), and has become fast a very popular technology in social media. This paper investigates the analogous problem in the audio domain: How to transfer the style of a reference audio signal to a target audio content? We propose a flexible framework for the task, which uses a sound texture model to extract statistics characterizing the reference audio style, followed by an optimization-based audio texture synthesis to modify the target content. In contrast to mainstream optimization-based visual transfer method, the proposed process is initialized by the target content instead of random noise and the optimized loss is only about texture, not structure. These differences proved key for audio style transfer in our experiments. In order to extract features of interest, we investigate different architectures, whether pre-trained on other tasks, as done in image style transfer, or engineered based on the human auditory system. Experimental results on different types of audio signal confirm the potential of the proposed approach.",
"We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a per-pixel loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing perceptual loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results.",
""
]
} |
1901.04596 | 2909671697 | The success of deep neural networks often relies on a large amount of labeled examples, which can be difficult to obtain in many real scenarios. To address this challenge, unsupervised methods are strongly preferred for training neural networks without using any labeled data. In this paper, we present a novel paradigm of unsupervised representation learning by Auto-Encoding Transformation (AET) in contrast to the conventional Auto-Encoding Data (AED) approach. Given a randomly sampled transformation, AET seeks to predict it merely from the encoded features as accurately as possible at the output end. The idea is the following: as long as the unsupervised features successfully encode the essential information about the visual structures of original and transformed images, the transformation can be well predicted. We will show that this AET paradigm allows us to instantiate a large variety of transformations, from parameterized, to non-parameterized and GAN-induced ones. Our experiments show that AET greatly improves over existing unsupervised approaches, setting new state-of-the-art performances being greatly closer to the upper bounds by their fully supervised counterparts on CIFAR-10, ImageNet and Places datasets. | Generative Adversarial Nets. Besides the auto-encoders, Generative Adversarial Nets (GANs) have become popular for training network representations of data in an unsupervised fashion. Unlike the auto-encoders, GANs attempt to directly generate data from noise drawn from a random distribution. By viewing the sampled noise vectors as the coordinates over the manifold of real data, one can use them as the features to represent data. For this purpose, one usually needs to train a data encoder to find the noise that can generate the input images through the GAN generator. This can be implemented by jointly training a pair of mutually inverse generator and encoder @cite_19 @cite_6 . A prominent characteristic of GANs that makes them different from auto-encoders is that they do not rely on one-to-one reconstruction of input data at the output end. Instead, they focus on discovering and generating the entire distribution of data over the underlying manifold. Recent progress has shown the promising generalization ability of regularized GANs in generating unseen data based on the Lipschitz assumption on the real data distribution @cite_15 @cite_24 , and this shows great potential of GANs in providing expressive representation of images @cite_19 @cite_6 @cite_20 . | {
"cite_N": [
"@cite_6",
"@cite_24",
"@cite_19",
"@cite_15",
"@cite_20"
],
"mid": [
"2411541852",
"",
"2412320034",
"2580360036",
"2894573160"
],
"abstract": [
"We introduce the adversarially learned inference (ALI) model, which jointly learns a generation network and an inference network using an adversarial process. The generation network maps samples from stochastic latent variables to the data space while the inference network maps training examples in data space to the space of latent variables. An adversarial game is cast between these two networks and a discriminative network is trained to distinguish between joint latent data-space samples from the generative network and joint samples from the inference network. We illustrate the ability of the model to learn mutually coherent inference and generation networks through the inspections of model samples and reconstructions and confirm the usefulness of the learned representations by obtaining a performance competitive with state-of-the-art on the semi-supervised SVHN and CIFAR10 tasks.",
"",
"The ability of the Generative Adversarial Networks (GANs) framework to learn generative models mapping from simple latent distributions to arbitrarily complex data distributions has been demonstrated empirically, with compelling results showing that the latent space of such generators captures semantic variation in the data distribution. Intuitively, models trained to predict these semantic latent representations given data may serve as useful feature representations for auxiliary problems where semantics are relevant. However, in their existing form, GANs have no means of learning the inverse mapping -- projecting data back into the latent space. We propose Bidirectional Generative Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and demonstrate that the resulting learned feature representation is useful for auxiliary supervised discrimination tasks, competitive with contemporary approaches to unsupervised and self-supervised feature learning.",
"In this paper, we present the Lipschitz regularization theory and algorithms for a novel Loss-Sensitive Generative Adversarial Network (LS-GAN). Specifically, it trains a loss function to distinguish between real and fake samples by designated margins, while learning a generator alternately to produce realistic samples by minimizing their losses. The LS-GAN further regularizes its loss function with a Lipschitz regularity condition on the density of real data, yielding a regularized model that can better generalize to produce new data from a reasonable number of training examples than the classic GAN. We will further present a Generalized LS-GAN (GLS-GAN) and show it contains a large family of regularized GAN models, including both LS-GAN and Wasserstein GAN, as its special cases. Compared with the other GAN models, we will conduct experiments to show both LS-GAN and GLS-GAN exhibit competitive ability in generating new images in terms of the Minimum Reconstruction Error (MRE) assessed on a separate test set. We further extend the LS-GAN to a conditional form for supervised and semi-supervised learning problems, and demonstrate its outstanding performance on image classification tasks.",
"The classic Generative Adversarial Net and its variants can be roughly categorized into two large families: the unregularized versus regularized GANs. By relaxing the non-parametric assumption on the discriminator in the classic GAN, the regularized GANs have better generalization ability to produce new samples drawn from the real distribution. It is well known that the real data like natural images are not uniformly distributed over the whole data space. Instead, they are often restricted to a low-dimensional manifold of the ambient space. Such a manifold assumption suggests the distance over the manifold should be a better measure to characterize the distinct between real and fake samples. Thus, we define a pullback operator to map samples back to their data manifold, and a manifold margin is defined as the distance between the pullback representations to distinguish between real and fake samples and learn the optimal generators. We justify the effectiveness of the proposed model both theoretically and empirically."
]
} |
1901.04604 | 2910098502 | State-of-the-art methods for image-to-image translation with Generative Adversarial Networks (GANs) can learn a mapping from one domain to another domain using unpaired image data. However, these methods require the training of one specific model for every pair of image domains, which limits the scalability in dealing with more than two image domains. In addition, the training stage of these methods has the common problem of model collapse that degrades the quality of the generated images. To tackle these issues, we propose a Dual Generator Generative Adversarial Network (G @math GAN), which is a robust and scalable approach allowing to perform unpaired image-to-image translation for multiple domains using only dual generators within a single model. Moreover, we explore different optimization losses for better training of G @math GAN, and thus make unpaired image-to-image translation with higher consistency and better stability. Extensive experiments on six publicly available datasets with different scenarios, i.e., architectural buildings, seasons, landscape and human faces, demonstrate that the proposed G @math GAN achieves superior model capacity and better generation performance comparing with existing image-to-image translation GAN models. | @cite_20 are powerful generative models, which have achieved impressive results on different computer vision tasks, e.g., image generation @cite_3 @cite_15 , editing @cite_49 @cite_22 and inpainting @cite_35 @cite_46 . However, GANs are difficult to train, since it is hard to keep the balance between the generator and the discriminator, which makes the optimization oscillate, thus leading to a collapse of the generator. To address this, several solutions have been proposed recently, such as Wasserstein GAN @cite_40 and Loss-Sensitive GAN @cite_27 . To generate more meaningful images, CGAN @cite_2 has been proposed to employ conditioned information to guide the image generation. Extra information can also be used such as discrete category labels @cite_8 @cite_13 , text descriptions @cite_31 @cite_34 , object/face keypoints @cite_30 @cite_19 , human skeleton @cite_7 @cite_10 and referenced images @cite_32 @cite_41 . CGAN models have been successfully used in various applications, such as image editing @cite_8 , text-to-image translation @cite_31 and image-to-image translation @cite_32 tasks. | {
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_22",
"@cite_41",
"@cite_3",
"@cite_2",
"@cite_15",
"@cite_10",
"@cite_20",
"@cite_8",
"@cite_49",
"@cite_46",
"@cite_7",
"@cite_32",
"@cite_19",
"@cite_40",
"@cite_27",
"@cite_34",
"@cite_31",
"@cite_13"
],
"mid": [
"2530372461",
"",
"2607170299",
"2896240508",
"",
"2125389028",
"2598591334",
"2964002510",
"2099471712",
"2552611751",
"2964144352",
"2479644247",
"2963361924",
"2963073614",
"2962692288",
"",
"2580360036",
"2405756170",
"2963143316",
"2739540493"
],
"abstract": [
"Generative Adversarial Networks (GANs) have recently demonstrated the capability to synthesize compelling real-world images, such as room interiors, album covers, manga, faces, birds, and flowers. While existing models can synthesize images based on global constraints such as a class label or caption, they do not provide control over pose or object location. We propose a new model, the Generative Adversarial What-Where Network (GAWWN), that synthesizes images given instructions describing what content to draw in which location. We show high-quality 128 × 128 image synthesis on the Caltech-UCSD Birds dataset, conditioned on both informal text descriptions and also object location. Our system exposes control over both the bounding box around the bird and its constituent parts. By modeling the conditional distributions over part locations, our system also enables conditioning on arbitrary subsets of parts (e.g. only the beak and tail), yielding an efficient interface for picking part locations.",
"",
"Traditional face editing methods often require a number of sophisticated and task specific algorithms to be applied one after the other — a process that is tedious, fragile, and computationally intensive. In this paper, we propose an end-to-end generative adversarial network that infers a face-specific disentangled representation of intrinsic face properties, including shape (i.e. normals), albedo, and lighting, and an alpha matte. We show that this network can be trained on in-the-wild images by incorporating an in-network physically-based image formation module and appropriate loss functions. Our disentangling latent representation allows for semantically relevant edits, where one aspect of facial appearance can be manipulated while keeping orthogonal properties fixed, and we demonstrate its use for a number of facial editing applications.",
"Facial makeup transfer aims to translate the makeup style from a given reference makeup face image to another non-makeup one while preserving face identity. Such an instance-level transfer problem is more challenging than conventional domain-level transfer tasks, especially when paired data is unavailable. Makeup style is also different from global styles (e.g., paintings) in that it consists of several local styles cosmetics, including eye shadow, lipstick, foundation, and so on. Extracting and transferring such local and delicate makeup information is infeasible for existing style transfer methods. We address the issue by incorporating both global domain-level loss and local instance-level loss in an dual input output Generative Adversarial Network, called BeautyGAN. Specifically, the domain-level transfer is ensured by discriminators that distinguish generated images from domains' real samples. The instance-level loss is calculated by pixel-level histogram loss on separate local facial regions. We further introduce perceptual loss and cycle consistency loss to generate high quality faces and preserve identity. The overall objective function enables the network to learn translation on instance-level through unsupervised adversarial learning. We also build up a new makeup dataset that consists of 3834 high-resolution face images. Extensive experiments show that BeautyGAN could generate visually pleasant makeup faces and accurate transferring results. Data and code are available at http: liusi-group.com projects BeautyGAN.",
"",
"Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels.",
"We present a transformation-grounded image generation network for novel 3D view synthesis from a single image. Our approach first explicitly infers the parts of the geometry visible both in the input and novel views and then casts the remaining synthesis problem as image completion. Specifically, we both predict a flow to move the pixels from the input to the novel view along with a novel visibility map that helps deal with occulsion disocculsion. Next, conditioned on those intermediate results, we hallucinate (infer) parts of the object invisible in the input image. In addition to the new network structure, training with a combination of adversarial and perceptual loss results in a reduction in common artifacts of novel view synthesis such as distortions and holes, while successfully generating high frequency details and preserving visual aspects of the input image. We evaluate our approach on a wide range of synthetic and real examples. Both qualitative and quantitative results show our method achieves significantly better results compared to existing methods.",
"In this paper we address the problem of generating person images conditioned on a given pose. Specifically, given an image of a person and a target pose, we synthesize a new image of that person in the novel pose. In order to deal with pixel-to-pixel misalignments caused by the pose differences, we introduce deformable skip connections in the generator of our Generative Adversarial Network. Moreover, a nearest-neighbour loss is proposed instead of the common L1 and L2 losses in order to match the details of the generated image with the target image. We test our approach using photos of persons in different poses and we compare our method with previous work in this area showing state-of-the-art results in two benchmarks. Our method can be applied to the wider field of deformable object generation, provided that the pose of the articulated object can be extracted using a keypoint detector.",
"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.",
"Generative Adversarial Networks (GANs) have recently demonstrated to successfully approximate complex data distributions. A relevant extension of this model is conditional GANs (cGANs), where the introduction of external information allows to determine specific representations of the generated images. In this work, we evaluate encoders to inverse the mapping of a cGAN, i.e., mapping a real image into a latent space and a conditional representation. This allows, for example, to reconstruct and modify real images of faces conditioning on arbitrary attributes. Additionally, we evaluate the design of cGANs. The combination of an encoder with a cGAN, which we call Invertible cGAN (IcGAN), enables to re-generate real images with deterministic complex modifications.",
"The increasingly photorealistic sample quality of generative image models suggests their feasibility in applications beyond image generation. We present the Neural Photo Editor, an interface that leverages the power of generative neural networks to make large, semantically coherent changes to existing images. To tackle the challenge of achieving accurate reconstructions without loss of feature quality, we introduce the Introspective Adversarial Network, a novel hybridization of the VAE and GAN. Our model efficiently captures long-range dependencies through use of a computational block based on weight-shared dilated convolutions, and improves generalization performance with Orthogonal Regularization, a novel weight regularization method. We validate our contributions on CelebA, SVHN, and CIFAR-100, and produce samples and reconstructions with high visual fidelity.",
"In this paper, we propose a novel method for image inpainting based on a Deep Convolutional Generative Adversarial Network (DCGAN). We define a loss function consisting of two parts: (1) a contextual loss that preserves similarity between the input corrupted image and the recovered image, and (2) a perceptual loss that ensures a perceptually realistic output image. Given a corrupted image with missing values, we use back-propagation on this loss to map the corrupted image to a smaller latent space. The mapped vector is then passed through the generative model to predict the missing content. The proposed framework is evaluated on the CelebA and SVHN datasets for two challenging inpainting tasks with random 80 corruption and large blocky corruption. Experiments show that our method can successfully predict semantic information in the missing region and achieve pixel-level photorealism, which is impossible by almost all existing methods.",
"Hand gesture-to-gesture translation in the wild is a challenging task since hand gestures can have arbitrary poses, sizes, locations and self-occlusions. Therefore, this task requires a high-level understanding of the mapping between the input source gesture and the output target gesture. To tackle this problem, we propose a novel hand Gesture Generative Adversarial Network (GestureGAN). GestureGAN consists of a single generator G and a discriminator D, which takes as input a conditional hand image and a target hand skeleton image. GestureGAN utilizes the hand skeleton information explicitly, and learns the gesture-to-gesture mapping through two novel losses, the color loss and the cycle-consistency loss. The proposed color loss handles the issue of \"channel pollution\" while back-propagating the gradients. In addition, we present the Frechet ResNet Distance (FRD) to evaluate the quality of generated images. Extensive experiments on two widely used benchmark datasets demonstrate that the proposed GestureGAN achieves state-of-the-art performance on the unconstrained hand gesture-to-gesture translation task. Meanwhile, the generated images are in high-quality and are photo-realistic, allowing them to be used as data augmentation to improve the performance of a hand gesture classifier. Our model and code are available at https: github.com Ha0Tang GestureGAN.",
"We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Moreover, since the release of the pix2pix software associated with this paper, hundreds of twitter users have posted their own artistic experiments using our system. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without handengineering our loss functions either.",
"Each smile is unique: one person surely smiles in different ways (e.g. closing opening the eyes or mouth). Given one input image of a neutral face, can we generate multiple smile videos with distinctive characteristics? To tackle this one-to-many video generation problem, we propose a novel deep learning architecture named Conditional Multi-Mode Network (CMM-Net). To better encode the dynamics of facial expressions, CMM-Net explicitly exploits facial landmarks for generating smile sequences. Specifically, a variational auto-encoder is used to learn a facial landmark embedding. This single embedding is then exploited by a conditional recurrent network which generates a landmark embedding sequence conditioned on a specific expression (e.g. spontaneous smile). Next, the generated landmark embeddings are fed into a multi-mode recurrent landmark generator, producing a set of landmark sequences still associated to the given smile class but clearly distinct from each other. Finally, these landmark sequences are translated into face videos. Our experimental results demonstrate the effectiveness of our CMM-Net in generating realistic videos of multiple smile expressions.",
"",
"In this paper, we present the Lipschitz regularization theory and algorithms for a novel Loss-Sensitive Generative Adversarial Network (LS-GAN). Specifically, it trains a loss function to distinguish between real and fake samples by designated margins, while learning a generator alternately to produce realistic samples by minimizing their losses. The LS-GAN further regularizes its loss function with a Lipschitz regularity condition on the density of real data, yielding a regularized model that can better generalize to produce new data from a reasonable number of training examples than the classic GAN. We will further present a Generalized LS-GAN (GLS-GAN) and show it contains a large family of regularized GAN models, including both LS-GAN and Wasserstein GAN, as its special cases. Compared with the other GAN models, we will conduct experiments to show both LS-GAN and GLS-GAN exhibit competitive ability in generating new images in terms of the Minimum Reconstruction Error (MRE) assessed on a separate test set. We further extend the LS-GAN to a conditional form for supervised and semi-supervised learning problems, and demonstrate its outstanding performance on image classification tasks.",
"Automatic synthesis of realistic images from text would be interesting and useful, but current AI systems are still far from this goal. However, in recent years generic and powerful recurrent neural network architectures have been developed to learn discriminative text feature representations. Meanwhile, deep convolutional generative adversarial networks (GANs) have begun to generate highly compelling images of specific categories, such as faces, album covers, and room interiors. In this work, we develop a novel deep architecture and GAN formulation to effectively bridge these advances in text and image modeling, translating visual concepts from characters to pixels. We demonstrate the capability of our model to generate plausible images of birds and flowers from detailed text descriptions.",
"Abstract: Motivated by the recent progress in generative models, we introduce a model that generates images from natural language descriptions. The proposed model iteratively draws patches on a canvas, while attending to the relevant words in the description. After training on Microsoft COCO, we compare our model with several baseline generative models on image generation and retrieval tasks. We demonstrate that our model produces higher quality samples than other approaches and generates images with novel scene compositions corresponding to previously unseen captions in the dataset.",
"Despite the promising results on paired unpaired image-to-image translation achieved by Generative Adversarial Networks (GANs), prior works often only transfer the low-level information (e.g. color or texture changes), but fail to manipulate high-level semantic meanings (e.g., geometric structure or content) of different object regions. On the other hand, while some researches can synthesize compelling real-world images given a class label or caption, they cannot condition on arbitrary shapes or structures, which largely limits their application scenarios and interpretive capability of model results. In this work, we focus on a more challenging semantic manipulation task, aiming at modifying the semantic meaning of an object while preserving its own characteristics (e.g. viewpoints and shapes), such as cow ( )sheep, motor ( )bicycle, cat ( )dog. To tackle such large semantic changes, we introduce a contrasting GAN (contrast-GAN) with a novel adversarial contrasting objective which is able to perform all types of semantic translations with one category-conditional generator. Instead of directly making the synthesized samples close to target data as previous GANs did, our adversarial contrasting objective optimizes over the distance comparisons between samples, that is, enforcing the manipulated data be semantically closer to the real data with target category than the input data. Equipped with the new contrasting objective, a novel mask-conditional contrast-GAN architecture is proposed to enable disentangle image background with object semantic changes. Extensive qualitative and quantitative experiments on several semantic manipulation tasks on ImageNet and MSCOCO dataset show considerable performance gain by our contrast-GAN over other conditional GANs."
]
} |
1901.04604 | 2910098502 | State-of-the-art methods for image-to-image translation with Generative Adversarial Networks (GANs) can learn a mapping from one domain to another domain using unpaired image data. However, these methods require the training of one specific model for every pair of image domains, which limits the scalability in dealing with more than two image domains. In addition, the training stage of these methods has the common problem of model collapse that degrades the quality of the generated images. To tackle these issues, we propose a Dual Generator Generative Adversarial Network (G @math GAN), which is a robust and scalable approach allowing to perform unpaired image-to-image translation for multiple domains using only dual generators within a single model. Moreover, we explore different optimization losses for better training of G @math GAN, and thus make unpaired image-to-image translation with higher consistency and better stability. Extensive experiments on six publicly available datasets with different scenarios, i.e., architectural buildings, seasons, landscape and human faces, demonstrate that the proposed G @math GAN achieves superior model capacity and better generation performance comparing with existing image-to-image translation GAN models. | CGAN models learn a translation between image inputs and image outputs using neural networks. Isola et al. @cite_32 design the pix2pix framework which is a conditional framework using a CGAN to learn the mapping function. Based on pix2pix, Zhu et al. @cite_9 further present BicycleGAN which achieves multi-modal image-to-image translation using paired data. Similar ideas have also been applied to many other tasks, such as generating photographs from sketches @cite_47 . However, most of the models require paired training data, which are usually costly to obtain. | {
"cite_N": [
"@cite_9",
"@cite_47",
"@cite_32"
],
"mid": [
"2963330667",
"2560481159",
"2963073614"
],
"abstract": [
"Many image-to-image translation problems are ambiguous, as a single input image may correspond to multiple possible outputs. In this work, we aim to model a distribution of possible outputs in a conditional generative modeling setting. The ambiguity of the mapping is distilled in a low-dimensional latent vector, which can be randomly sampled at test time. A generator learns to map the given input, combined with this latent code, to the output. We explicitly encourage the connection between output and the latent code to be invertible. This helps prevent a many-to-one mapping from the latent code to the output during training, also known as the problem of mode collapse, and produces more diverse results. We explore several variants of this approach by employing different training objectives, network architectures, and methods of injecting the latent code. Our proposed method encourages bijective consistency between the latent encoding and output modes. We present a systematic comparison of our method and other variants on both perceptual realism and diversity.",
"Several recent works have used deep convolutional networks to generate realistic imagery. These methods sidestep the traditional computer graphics rendering pipeline and instead generate imagery at the pixel level by learning from large collections of photos (e.g. faces or bedrooms). However, these methods are of limited utility because it is difficult for a user to control what the network produces. In this paper, we propose a deep adversarial image synthesis architecture that is conditioned on sketched boundaries and sparse color strokes to generate realistic cars, bedrooms, or faces. We demonstrate a sketch based image synthesis system which allows users to scribble over the sketch to indicate preferred color for objects. Our network can then generate convincing images that satisfy both the color and the sketch constraints of user. The network is feed-forward which allows users to see the effect of their edits in real time. We compare to recent work on sketch to image synthesis and show that our approach generates more realistic, diverse, and controllable outputs. The architecture is also effective at user-guided colorization of grayscale images.",
"We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Moreover, since the release of the pix2pix software associated with this paper, hundreds of twitter users have posted their own artistic experiments using our system. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without handengineering our loss functions either."
]
} |
1901.04604 | 2910098502 | State-of-the-art methods for image-to-image translation with Generative Adversarial Networks (GANs) can learn a mapping from one domain to another domain using unpaired image data. However, these methods require the training of one specific model for every pair of image domains, which limits the scalability in dealing with more than two image domains. In addition, the training stage of these methods has the common problem of model collapse that degrades the quality of the generated images. To tackle these issues, we propose a Dual Generator Generative Adversarial Network (G @math GAN), which is a robust and scalable approach allowing to perform unpaired image-to-image translation for multiple domains using only dual generators within a single model. Moreover, we explore different optimization losses for better training of G @math GAN, and thus make unpaired image-to-image translation with higher consistency and better stability. Extensive experiments on six publicly available datasets with different scenarios, i.e., architectural buildings, seasons, landscape and human faces, demonstrate that the proposed G @math GAN achieves superior model capacity and better generation performance comparing with existing image-to-image translation GAN models. | To alleviate the issue of pairing training data, Zhu et al. @cite_42 introduce CycleGAN, which learns the mappings between two unpaired image domains without supervision with the aid of a cycle-consistency loss. Apart from CycleGAN, there are other variants proposed to tackle the problem. For instance, CoupledGAN @cite_21 uses a weight-sharing strategy to learn common representations across domains. Taigman et al. @cite_29 propose a Domain Transfer Network (DTN) which learns a generative function between one domain and another domain. Liu et al. @cite_45 extend the basic structure of GANs via combining the Variational Autoencoders (VAEs) and GANs. A novel DualGAN mechanism is demonstrated in @cite_17 , in which image translators are trained from two unlabeled image sets each representing an image domain. Kim et al. @cite_24 propose a method based on GANs that learns to discover relations between different domains. However, these models are only suitable in cross-domain translation problems. | {
"cite_N": [
"@cite_29",
"@cite_42",
"@cite_21",
"@cite_24",
"@cite_45",
"@cite_17"
],
"mid": [
"2962964479",
"2962793481",
"2963784072",
"2598581049",
"2962947361",
"2963444790"
],
"abstract": [
"In unsupervised domain mapping, the learner is given two unmatched datasets A and B. The goal is to learn a mapping GAB that translates a sample in A to the analog sample in B. Recent approaches have shown that when learning simultaneously both GAB and the inverse mapping GBA, convincing mappings are obtained. In this work, we present a method of learning GAB without learning GBA. This is done by learning a mapping that maintains the distance between a pair of samples. Moreover, good mappings are obtained, even by maintaining the distance between different parts of the same sample before and after mapping. We present experimental results that the new method not only allows for one sided mapping learning, but also leads to preferable numerical results over the existing circularity-based constraint. Our entire code is made publicly available at https: github.com sagiebenaim DistanceGAN.",
"Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.",
"We propose coupled generative adversarial network (CoGAN) for learning a joint distribution of multi-domain images. In contrast to the existing approaches, which require tuples of corresponding images in different domains in the training set, CoGAN can learn a joint distribution without any tuple of corresponding images. It can learn a joint distribution with just samples drawn from the marginal distributions. This is achieved by enforcing a weight-sharing constraint that limits the network capacity and favors a joint distribution solution over a product of marginal distributions one. We apply CoGAN to several joint distribution learning tasks, including learning a joint distribution of color and depth images, and learning a joint distribution of face images with different attributes. For each task it successfully learns the joint distribution without any tuple of corresponding images. We also demonstrate its applications to domain adaptation and image transformation.",
"While humans easily recognize relations between data from different domains without any supervision, learning to automatically discover them is in general very challenging and needs many ground-truth pairs that illustrate the relations. To avoid costly pairing, we address the task of discovering cross-domain relations when given unpaired data. We propose a method based on generative adversarial networks that learns to discover relations between different domains (DiscoGAN). Using the discovered relations, our proposed network successfully transfers style from one domain to another while preserving key attributes such as orientation and face identity.",
"Unsupervised image-to-image translation aims at learning a joint distribution of images in different domains by using images from the marginal distributions in individual domains. Since there exists an infinite set of joint distributions that can arrive the given marginal distributions, one could infer nothing about the joint distribution from the marginal distributions without additional assumptions. To address the problem, we make a shared-latent space assumption and propose an unsupervised image-to-image translation framework based on Coupled GANs. We compare the proposed framework with competing approaches and present high quality image translation results on various challenging unsupervised image translation tasks, including street scene image translation, animal image translation, and face image translation. We also apply the proposed framework to domain adaptation and achieve state-of-the-art performance on benchmark datasets. Code and additional results are available in https: github.com mingyuliutw unit.",
"Conditional Generative Adversarial Networks (GANs) for cross-domain image-to-image translation have made much progress recently [7, 8, 21, 12, 4, 18]. Depending on the task complexity, thousands to millions of labeled image pairs are needed to train a conditional GAN. However, human labeling is expensive, even impractical, and large quantities of data may not always be available. Inspired by dual learning from natural language translation [23], we develop a novel dual-GAN mechanism, which enables image translators to be trained from two sets of unlabeled images from two domains. In our architecture, the primal GAN learns to translate images from domain U to those in domain V, while the dual GAN learns to invert the task. The closed loop made by the primal and dual tasks allows images from either domain to be translated and then reconstructed. Hence a loss function that accounts for the reconstruction error of images can be used to train the translators. Experiments on multiple image translation tasks with unlabeled data show considerable performance gain of DualGAN over a single GAN. For some tasks, DualGAN can even achieve comparable or slightly better results than conditional GAN trained on fully labeled data."
]
} |
1901.04604 | 2910098502 | State-of-the-art methods for image-to-image translation with Generative Adversarial Networks (GANs) can learn a mapping from one domain to another domain using unpaired image data. However, these methods require the training of one specific model for every pair of image domains, which limits the scalability in dealing with more than two image domains. In addition, the training stage of these methods has the common problem of model collapse that degrades the quality of the generated images. To tackle these issues, we propose a Dual Generator Generative Adversarial Network (G @math GAN), which is a robust and scalable approach allowing to perform unpaired image-to-image translation for multiple domains using only dual generators within a single model. Moreover, we explore different optimization losses for better training of G @math GAN, and thus make unpaired image-to-image translation with higher consistency and better stability. Extensive experiments on six publicly available datasets with different scenarios, i.e., architectural buildings, seasons, landscape and human faces, demonstrate that the proposed G @math GAN achieves superior model capacity and better generation performance comparing with existing image-to-image translation GAN models. | There are only a few recent methods attempting to implement multi-modal image-to-image translation in an efficient way. Anoosheh et al. propose a ComboGAN model @cite_38 , which only needs to train @math generator/discriminator pairs for @math different image domains. To further reduce the model complexity, Choi et al. introduce StarGAN @cite_25 , which has a single generator/discriminator pair and is able to perform the task with a complexity of @math . Although the model complexity is low, jointly learning both the translation and reconstruction tasks with the same generator requires the sharing of all parameters, which increases the optimization complexity and reduces the generalization ability, thus leading to unsatisfactory generation performance. The proposed approach aims at obtaining a good balance between the model capacity and the generation quality. Along this research line, we propose a Dual Generator Generative Adversarial Network (G @math GAN), which achieves this target via using two task-specific generators and one discriminator. We also explore various optimization objectives to better train the model to produce more consistent and more stable results. | {
"cite_N": [
"@cite_38",
"@cite_25"
],
"mid": [
"2963344645",
"2963767194"
],
"abstract": [
"The past year alone has seen unprecedented leaps in the area of learning-based image translation, namely Cycle-GAN, by But experiments so far have been tailored to merely two domains at a time, and scaling them to more would require an quadratic number of models to be trained. And with two-domain models taking days to train on current hardware, the number of domains quickly becomes limited by the time and resources required to process them. In this paper, we propose a multi-component image translation model and training scheme which scales linearly - both in resource consumption and time required - with the number of domains. We demonstrate its capabilities on a dataset of paintings by 14 different artists and on images of the four different seasons in the Alps. Note that 14 data groups would need (14 choose 2) = 91 different CycleGAN models: a total of 182 generator discriminator pairs; whereas our model requires only 14 generator discriminator pairs.",
"Recent studies have shown remarkable success in image-to-image translation for two domains. However, existing approaches have limited scalability and robustness in handling more than two domains, since different models should be built independently for every pair of image domains. To address this limitation, we propose StarGAN, a novel and scalable approach that can perform image-to-image translations for multiple domains using only a single model. Such a unified model architecture of StarGAN allows simultaneous training of multiple datasets with different domains within a single network. This leads to StarGAN's superior quality of translated images compared to existing models as well as the novel capability of flexibly translating an input image to any desired target domain. We empirically demonstrate the effectiveness of our approach on a facial attribute transfer and a facial expression synthesis tasks."
]
} |
1901.04555 | 2910489255 | Previous attempts at music artist classification use frame level audio features which summarize frequency content within short intervals of time. Comparatively, more recent music information retrieval tasks take advantage of temporal structure in audio spectrograms using deep convolutional and recurrent models. This paper revisits artist classification with this new framework and empirically explores the impacts of incorporating temporal structure in the feature representation. To this end, an established classification architecture, a Convolutional Recurrent Neural Network (CRNN), is applied to the artist20 music artist identification dataset under a comprehensive set of conditions. These include audio clip length, which is a novel contribution in this work, and previously identified considerations such as dataset split and feature level. Our results improve upon baseline works, verify the influence of the producer effect on classification performance and demonstrate the trade-offs between audio length and training set size. The best performing model achieves an average F1 score of 0.937 across three independent trials which is a substantial improvement over the corresponding baseline under similar conditions. Additionally, to showcase the effectiveness of the CRNN's feature extraction capabilities, we visualize audio samples at the model's bottleneck layer demonstrating that learned representations segment into clusters belonging to their respective artists. | Advances in artist classification with machine learning are limited. Furthermore, while exploratory attempts exist @cite_4 , a comprehensive study with deep learning is still absent. Prior academic works are almost a decade old and employ traditional algorithms which do not work well with high-dimensional and sequential data. @cite_19 , for example, use a one second MFCC feature representation and a Support Vector Machine (SVM) classification model to achieve a best test accuracy of 50%. Addressing this issue alongside the release of the dataset, Labrosa uses a full-covariance Gaussian classifier to establish an artist classification baseline @cite_20 with an album split. Using randomly sampled MFCC vectors from each artist, the model achieves 56%. Mandel and Ellis @cite_11 study the impact of frame level versus song level evaluation using Gaussian Mixture Models (GMM) and SVMs. On the dataset, at the frame level, the GMM achieves comparable performance (54%). All of the works discussed in this subsection are summarized in Table and used as baselines in this study. | {
"cite_N": [
"@cite_19",
"@cite_4",
"@cite_20",
"@cite_11"
],
"mid": [
"2107252339",
"",
"1832115024",
"1508121828"
],
"abstract": [
"In this paper we demonstrate the artist detection component of Minnowmatch, a machine listening and music retrieval engine. Minnowmatch (Mima) automatically determines various meta-data and makes classifications concerning a piece of audio using neural networks and support vector machines. The technologies developed in Minnowmatch may be used to create audio information retrieval systems, copyright protection devices, and recommendation agents. This paper concentrates on the artist or source detection component of Mima, which we show to classify a one-in-n artist space correctly 91 over a small song-set and 70 over a larger song set. We show that scaling problems using only neural networks for classification can be addressed with a pre-classification step of multiple support vector machines.",
"",
"Music audio classification has most often been addressed bymodeling the statistics of broad spectral features, which, by design, exclude pitch information and reflect mainly instrumentation. We investigate using instead beat-synchronous chroma features, designed to reflect melodic and harmonic content and be invariant to instrumentation. Chroma features are less informative for classes such as artist, but contain information that is almost entirely independent of the spectral features, and hence the two can be profitably combined: Using a simple Gaussian classifier on a 20-way pop music artist identification task, we achieve 54 accuracy with MFCCs, 30 with chroma vectors, and 57 by combining the two. All the data and Matlab code to obtain these results are available. 1",
"Searching and organizing growing digital music collections requires automatic classification of music. This paper describes a new system, tested on the task of artist identification, that uses support vector machines to classify songs based on features calculated over their entire lengths. Since support vector machines are exemplarbased classifiers, training on and classifying entire songs instead of short-time features makes intuitive sense. On a dataset of 1200 pop songs performed by 18 artists, we show that this classifier outperforms similar classifiers that use only SVMs or song-level features. We also show that the KL divergence between single Gaussians and Mahalanobis distance between MFCC statistics vectors perform comparably when classifiers are trained and tested on separate albums, but KL divergence outperforms Mahalanobis distance when trained and tested on songs from the same albums."
]
} |
1901.04555 | 2910489255 | Previous attempts at music artist classification use frame level audio features which summarize frequency content within short intervals of time. Comparatively, more recent music information retrieval tasks take advantage of temporal structure in audio spectrograms using deep convolutional and recurrent models. This paper revisits artist classification with this new framework and empirically explores the impacts of incorporating temporal structure in the feature representation. To this end, an established classification architecture, a Convolutional Recurrent Neural Network (CRNN), is applied to the artist20 music artist identification dataset under a comprehensive set of conditions. These include audio clip length, which is a novel contribution in this work, and previously identified considerations such as dataset split and feature level. Our results improve upon baseline works, verify the influence of the producer effect on classification performance and demonstrate the trade-offs between audio length and training set size. The best performing model achieves an average F1 score of 0.937 across three independent trials which is a substantial improvement over the corresponding baseline under similar conditions. Additionally, to showcase the effectiveness of the CRNN's feature extraction capabilities, we visualize audio samples at the model's bottleneck layer demonstrating that learned representations segment into clusters belonging to their respective artists. | Traditional MIR tasks relied heavily on MFCCs which extract frequency content in a short-window frame of audio; however, recent works are shifting towards spectrograms which take advantage of temporal structure and have shown better performance in classification tasks @cite_23 . A spectrogram is a representation of frequency content over time found by taking the squared magnitude of the short-time Fourier Transform (STFT) of a signal @cite_14 . The mathematical form of the discrete STFT is shown in Eq. , where @math and @math describe the input signal and window function @cite_13 . Figure illustrates how a spectrogram captures both frequency content and temporal variation for three second audio samples in the dataset. Although spectrograms can be difficult to interpret, simple audio features such as the presence of sound and its frequency range are readily identifiable. With familiarity, it is possible to extract more information from this representation such as identifying which instruments are being played @cite_7 . We hypothesize that the patterns across frequency and time also contain stylistic tendencies associated with an artist and thus deep learning architectures, such as Convolutional Neural Networks which excel at pattern recognition in two-dimensional data, should be able to learn them. | {
"cite_N": [
"@cite_14",
"@cite_13",
"@cite_7",
"@cite_23"
],
"mid": [
"2191779130",
"2063644176",
"2461724944",
"2414894569"
],
"abstract": [
"This document describes version 0.4.0 of librosa: a Python pack- age for audio and music signal processing. At a high level, librosa provides implementations of a variety of common functions used throughout the field of music information retrieval. In this document, a brief overview of the library's functionality is provided, along with explanations of the design goals, software development practices, and notational conventions.",
"The major time and frequency analysis methods that have been applied to music processing are traced and application areas described. Techniques are examined in the context of Cohen's class, facilitating comparison and the design of new approaches. A trumpet example illustrates most techniques. The impact of different analysis methods on pitch and timbre examination is shown. Analyses spanning Fourier series and transform, pitch synchronous analysis, heterodyne filter, short-time Fourier transform (STFT), phase vocoder, constant-Q and wavelet transforms, the Wigner (1932) distribution, and the modal distribution are all covered. The limitations of windowing methods and their reliance on steady-state assumptions and infinite duration sinusoids to define frequency and amplitude are detailed. The Wigner distribution, in contrast, uses the analytic signal to define instantaneous frequency and power parameters. The modal distribution is shown to be a linear transformation of the Wigner distribution optimized for estimating those parameters for a musical signal model. Application areas consider analysis, resynthesis, transcription, and visualization. The more stringent requirements for time-frequency (TF) distributions in these applications are compared with the weaker requirements found in speech analysis and highlight the need for further theoretical research.",
"Deep convolutional neural networks (CNNs) have been actively adopted in the field of music information retrieval, e.g. genre classification, mood detection, and chord recognition. However, the process of learning and prediction is little understood, particularly when it is applied to spectrograms. We introduce auralisation of a CNN to understand its underlying mechanism, which is based on a deconvolution procedure introduced in [2]. Auralisation of a CNN is converting the learned convolutional features that are obtained from deconvolution into audio signals. In the experiments and discussions, we explain trained features of a 5-layer CNN based on the deconvolved spectrograms and auralised signals. The pairwise correlations per layers with varying different musical attributes are also investigated to understand the evolution of the learnt features. It is shown that in the deep layers, the features are learnt to capture textures, the patterns of continuous distributions, rather than shapes of lines.",
"We present a content-based automatic music tagging algorithm using fully convolutional neural networks (FCNs). We evaluate different architectures consisting of 2D convolutional layers and subsampling layers only. In the experiments, we measure the AUC-ROC scores of the architectures with different complexities and input types using the MagnaTagATune dataset, where a 4-layer architecture shows state-of-the-art performance with mel-spectrogram input. Furthermore, we evaluated the performances of the architectures with varying the number of layers on a larger dataset (Million Song Dataset), and found that deeper models outperformed the 4-layer architecture. The experiments show that mel-spectrogram is an effective time-frequency representation for automatic tagging and that more complex models benefit from more training data."
]
} |
1901.04555 | 2910489255 | Previous attempts at music artist classification use frame level audio features which summarize frequency content within short intervals of time. Comparatively, more recent music information retrieval tasks take advantage of temporal structure in audio spectrograms using deep convolutional and recurrent models. This paper revisits artist classification with this new framework and empirically explores the impacts of incorporating temporal structure in the feature representation. To this end, an established classification architecture, a Convolutional Recurrent Neural Network (CRNN), is applied to the artist20 music artist identification dataset under a comprehensive set of conditions. These include audio clip length, which is a novel contribution in this work, and previously identified considerations such as dataset split and feature level. Our results improve upon baseline works, verify the influence of the producer effect on classification performance and demonstrate the trade-offs between audio length and training set size. The best performing model achieves an average F1 score of 0.937 across three independent trials which is a substantial improvement over the corresponding baseline under similar conditions. Additionally, to showcase the effectiveness of the CRNN's feature extraction capabilities, we visualize audio samples at the model's bottleneck layer demonstrating that learned representations segment into clusters belonging to their respective artists. | Since their early successes on ImageNet @cite_2 , CNNs have become a standard in deep learning when working with visual data. In a prior work, @cite_7 discuss how convolution can be used in MIR tasks; notably, they demonstrate that the layers in a CNN act as feature extractors. Generally, low-level layers describe sound onsets or the presence of individual instruments while high-level layers describe abstract patterns. This is consistent with established work @cite_10 which suggests that deep convolutional layers are a composition of lower-level features. Recurrent Neural Networks (RNN) have also had success with audio tasks such as speech recognition @cite_32 because sound is sequential along the time axis. In a follow-up study, @cite_34 show that including a recurrent unit to summarize temporal structure following 2D convolution, dubbed Convolutional Recurrent Neural Network, achieves the best performance in genre classification among four well-known audio classification architectures. In this work, we adapt the CRNN model to establish a deep learning baseline for artist classification. | {
"cite_N": [
"@cite_7",
"@cite_32",
"@cite_2",
"@cite_34",
"@cite_10"
],
"mid": [
"2461724944",
"2143612262",
"2163605009",
"2963451564",
"2150341604"
],
"abstract": [
"Deep convolutional neural networks (CNNs) have been actively adopted in the field of music information retrieval, e.g. genre classification, mood detection, and chord recognition. However, the process of learning and prediction is little understood, particularly when it is applied to spectrograms. We introduce auralisation of a CNN to understand its underlying mechanism, which is based on a deconvolution procedure introduced in [2]. Auralisation of a CNN is converting the learned convolutional features that are obtained from deconvolution into audio signals. In the experiments and discussions, we explain trained features of a 5-layer CNN based on the deconvolved spectrograms and auralised signals. The pairwise correlations per layers with varying different musical attributes are also investigated to understand the evolution of the learnt features. It is shown that in the deep layers, the features are learnt to capture textures, the patterns of continuous distributions, rather than shapes of lines.",
"Recurrent neural networks (RNNs) are a powerful model for sequential data. End-to-end training methods such as Connectionist Temporal Classification make it possible to train RNNs for sequence labelling problems where the input-output alignment is unknown. The combination of these methods with the Long Short-term Memory RNN architecture has proved particularly fruitful, delivering state-of-the-art results in cursive handwriting recognition. However RNN performance in speech recognition has so far been disappointing, with better results returned by deep feedforward networks. This paper investigates deep recurrent neural networks, which combine the multiple levels of representation that have proved so effective in deep networks with the flexible use of long range context that empowers RNNs. When trained end-to-end with suitable regularisation, we find that deep Long Short-term Memory RNNs achieve a test set error of 17.7 on the TIMIT phoneme recognition benchmark, which to our knowledge is the best recorded score.",
"We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5 and 17.0 which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overriding in the fully-connected layers we employed a recently-developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3 , compared to 26.2 achieved by the second-best entry.",
"We introduce a convolutional recurrent neural network (CRNN) for music tagging. CRNNs take advantage of convolutional neural networks (CNNs) for local feature extraction and recurrent neural networks for temporal summarisation of the extracted features. We compare CRNN with three CNN structures that have been used for music tagging while controlling the number of parameters with respect to their performance and training time per sample. Overall, we found that CRNNs show a strong performance with respect to the number of parameter and training time, indicating the effectiveness of its hybrid structure in music feature extraction and feature summarisation.",
"This monograph provides an overview of general deep learning methodology and its applications to a variety of signal and information processing tasks. The application areas are chosen with the following three criteria in mind: (1) expertise or knowledge of the authors; (2) the application areas that have already been transformed by the successful use of deep learning technology, such as speech recognition and computer vision; and (3) the application areas that have the potential to be impacted significantly by deep learning and that have been experiencing research growth, including natural language and text processing, information retrieval, and multimodal information processing empowered by multi-task deep learning."
]
} |
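The related-work passage in the entry above describes a Convolutional Recurrent Neural Network in which 2D convolutions extract local time-frequency features from a spectrogram and a recurrent unit summarizes them over time. Below is a minimal PyTorch sketch of that idea; the layer widths, the 96 mel bands, the clip length, and the 20-artist output are illustrative assumptions, not the configuration reported in the cited works.

```python
# Illustrative CRNN for artist classification over mel-spectrogram inputs.
# Shapes, layer widths, and the 20-class output are assumptions for this sketch.
import torch
import torch.nn as nn

class CRNN(nn.Module):
    def __init__(self, n_mels=96, n_classes=20):
        super().__init__()
        # Convolutional front end: local time-frequency feature extraction.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d((2, 2)),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.MaxPool2d((2, 2)),
        )
        # Recurrent back end: summarize the sequence of conv features over time.
        self.gru = nn.GRU(input_size=64 * (n_mels // 4), hidden_size=64, batch_first=True)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):            # x: (batch, 1, n_mels, n_frames)
        h = self.conv(x)             # (batch, 64, n_mels//4, n_frames//4)
        h = h.permute(0, 3, 1, 2)    # (batch, time, channels, freq)
        h = h.flatten(start_dim=2)   # (batch, time, channels*freq)
        _, last = self.gru(h)        # final hidden state summarizes the clip
        return self.fc(last[-1])     # (batch, n_classes) logits

logits = CRNN()(torch.randn(4, 1, 96, 128))  # four example clips
```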
1901.04846 | 2910937629 | Soil texture is important for many environmental processes. In this paper, we study the classification of soil texture based on hyperspectral data. We develop and implement three 1-dimensional (1D) convolutional neural networks (CNN): the LucasCNN, the LucasResNet which contains an identity block as residual network, and the LucasCoordConv with an additional coordinates layer. Furthermore, we modify two existing 1D CNN approaches for the presented classification task. The code of all five CNN approaches is available on GitHub (Riese, 2019). We evaluate the performance of the CNN approaches and compare them to a random forest classifier. Thereby, we rely on the freely available LUCAS topsoil dataset. The CNN approach with the least depth turns out to be the best performing classifier. The LucasCoordConv achieves the best performance regarding the average accuracy. In future work, we can further enhance the introduced LucasCNN, LucasResNet and LucasCoordConv and include additional variables of the rich LUCAS dataset. | In this section, we briefly review the published research which is related to the presented classification of soil texture based on hyperspectral data. A first review of geological remote sensing is given by @cite_7 . Traditional approaches like nearest mean, nearest neighbor, maximum likelihood, hidden Markov models and spectral angle matching for the classification of soil texture show acceptable results . | {
"cite_N": [
"@cite_7"
],
"mid": [
"2005353255"
],
"abstract": [
"Abstract Improvements in optical remote sensing spectral resolution and increased data volumes necessitates the development of improved techniques for quantitative geological analysis. Laboratory spectral studies indicate that absorption band positions, depths and widths are correlated with diagnostic physicochemical mineral properties such as composition and abundance. Most current analytical techniques are incapable of providing comprehensive quantitative analysis of hyperspectral geological remote sensing data. Factors which must be considered for hyperspectral remote sensing campaigns include spectral resolution, analytical technique, band pass positions and spatial resolution. In many cases the volume of data required to address specific issues can be reduced through intelligent selection of band passes and analytical techniques."
]
} |
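Among the traditional approaches listed in the entry above, spectral angle matching is the most self-contained to illustrate: a spectrum is assigned to the soil-texture class whose mean spectrum forms the smallest angle with it. A minimal sketch follows, assuming synthetic class means, four texture classes, and 200 spectral bands.

```python
# Minimal sketch of spectral angle matching for soil-texture classification:
# each spectrum is assigned to the class whose mean spectrum makes the smallest
# angle with it. The class means and the test spectrum are synthetic stand-ins.
import numpy as np

def spectral_angle(a, b):
    """Angle (radians) between two spectra, treated as vectors."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def classify(spectrum, class_means):
    angles = [spectral_angle(spectrum, m) for m in class_means]
    return int(np.argmin(angles))

rng = np.random.default_rng(0)
class_means = rng.random((4, 200))          # 4 texture classes, 200 bands (assumed)
spectrum = class_means[2] + 0.01 * rng.standard_normal(200)
print(classify(spectrum, class_means))      # -> 2
```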
1901.04846 | 2910937629 | Soil texture is important for many environmental processes. In this paper, we study the classification of soil texture based on hyperspectral data. We develop and implement three 1-dimensional (1D) convolutional neural networks (CNN): the LucasCNN, the LucasResNet which contains an identity block as residual network, and the LucasCoordConv with an additional coordinates layer. Furthermore, we modify two existing 1D CNN approaches for the presented classification task. The code of all five CNN approaches is available on GitHub (Riese, 2019). We evaluate the performance of the CNN approaches and compare them to a random forest classifier. Thereby, we rely on the freely available LUCAS topsoil dataset. The CNN approach with the least depth turns out to be the best performing classifier. The LucasCoordConv achieves the best performance regarding the average accuracy. In future work, we can further enhance the introduced LucasCNN, LucasResNet and LucasCoordConv and include additional variables of the rich LUCAS dataset. | The increasing popularity of deep learning approaches in many research disciplines has also reached the field of remote sensing. Deep learning approaches turn out to solve classification tasks better than shallow methods . @cite_8 give a detailed overview of deep learning in remote sensing and @cite_15 review the application of deep learning in hyperspectral image analysis. The application of 2-dimensional CNNs for classification and regression tasks based on hyperspectral images is proposed among others by @cite_0 . The two dimensions refer to the two spatial dimensions of hyperspectral images. Since hyperspectral images consist of several spectral channels, one additional dimension is possible: the spectral dimension. This spectral dimension can be utilized as a third dimension of a CNN or can be analyzed on its own by 1-dimensional (1D) CNNs. @cite_9 propose the use of 1D CNNs based on the spectral dimension of hyperspectral images. This network is described in sec:methods in detail. | {
"cite_N": [
"@cite_0",
"@cite_9",
"@cite_15",
"@cite_8"
],
"mid": [
"1966580635",
"1521436688",
"2573524522",
"2782522152"
],
"abstract": [
"Spectral observations along the spectrum in many narrow spectral bands through hyperspectral imaging provides valuable information towards material and object recognition, which can be consider as a classification task. Most of the existing studies and research efforts are following the conventional pattern recognition paradigm, which is based on the construction of complex handcrafted features. However, it is rarely known which features are important for the problem at hand. In contrast to these approaches, we propose a deep learning based classification method that hierarchically constructs high-level features in an automated way. Our method exploits a Convolutional Neural Network to encode pixels' spectral and spatial information and a Multi-Layer Perceptron to conduct the classification task. Experimental results and quantitative validation on widely used datasets showcasing the potential of the developed approach for accurate hyperspectral data classification.",
"Recently, convolutional neural networks have demonstrated excellent performance on various visual tasks, including the classification of common two-dimensional images. In this paper, deep convolutional neural networks are employed to classify hyperspectral images directly in spectral domain. More specifically, the architecture of the proposed classifier contains five layers with weights which are the input layer, the convolutional layer, the max pooling layer, the full connection layer, and the output layer. These five layers are implemented on each spectral signature to discriminate against others. Experimental results based on several hyperspectral image data sets demonstrate that the proposed method can achieve better classification performance than some traditional methods, such as support vector machines and the conventional deep learning-based methods.",
"Deep learning is a rather new approach to machine learning that has achieved remarkable results in a large number of different image processing applications. Lately, application of deep learning to detect and classify spectral and spatio-spectral signatures in hyperspectral images has emerged. The high dimensionality of hyperspectral images and the limited amount of labelled training data makes deep learning an appealing approach for analysing hyperspectral data. Auto-Encoder can be used to learn a hierarchical feature representation using solely unlabelled data, the learnt representation can be combined with a logistic regression classifier to achieve results in-line with existing state-of-the-art methods. In this paper, we compare results between a set of available publications and find that deep learning perform in line with state-of-the-art on many data sets but little evidence exists that deep learning outperform the reference methods.",
"Central to the looming paradigm shift toward data-intensive science, machine-learning techniques are becoming increasingly important. In particular, deep learning has proven to be both a major breakthrough and an extremely powerful tool in many fields. Shall we embrace deep learning as the key to everything? Or should we resist a black-box solution? These are controversial issues within the remote-sensing community. In this article, we analyze the challenges of using deep learning for remote-sensing data analysis, review recent advances, and provide resources we hope will make deep learning in remote sensing seem ridiculously simple. More importantly, we encourage remote-sensing scientists to bring their expertise into deep learning and use it as an implicit general model to tackle unprecedented, large-scale, influential challenges, such as climate change and urbanization."
]
} |
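The 1D CNN discussed in the entry above convolves along the spectral dimension of a single pixel rather than over spatial neighborhoods. A minimal sketch of such a network is given below; the band count, filter sizes, and number of output classes are assumptions for illustration, not the exact architecture of the cited papers.

```python
# Sketch of a 1D CNN operating on the spectral dimension of a single pixel.
# Band count, filter sizes, and class count are assumed values.
import torch
import torch.nn as nn

n_bands, n_classes = 256, 4
model = nn.Sequential(
    nn.Conv1d(1, 20, kernel_size=11), nn.ReLU(),   # convolve along wavelength
    nn.MaxPool1d(3),
    nn.Flatten(),
    nn.Linear(20 * ((n_bands - 10) // 3), 100), nn.ReLU(),
    nn.Linear(100, n_classes),
)
spectra = torch.randn(8, 1, n_bands)               # batch of 8 pixel spectra
print(model(spectra).shape)                        # torch.Size([8, 4])
```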
1901.04879 | 2910932954 | The popularity and projected growth of in-home smart speaker assistants, such as Amazon's Echo, has raised privacy concerns among consumers and privacy advocates. Notable questions regarding the collection and storage of user data by for-profit organizations include: what data is being collected and how is it being used, who has or can obtain access to such data, and how can user privacy be maintained while providing useful services. In addition to concerns regarding what the speaker manufacturer will do with your data, there are also more fundamental concerns about the security of these devices, third-party plugins, and the servers where they store recorded data. To address these privacy and security concerns, we introduce an intermediary device to provide an additional layer of security, which we call the or Smart 2 for short. By intelligently filtering sensitive conversations, and completely blocking this information from reaching a smart speaker's microphone(s), the Smart @math Speaker Blocker is an open-source, network-local (offline) smart device that provides users with decisive control over what data leaves their living room. | How smart speaker manufacturers evaluate external data requests, how they store user data, how user Terms of Service are specified and updated, and what information is included in transparency reports are all entirely dependent upon the manufacturer and vary between each one. With software services becoming increasingly interconnected through the incorporation of third-party applications, e.g. Alexa Skills, it becomes exponentially difficult for users to maintain an overview of their personal data. As suggested by , a legally binding set of industry standards ( a la IEEE) and certifications that apply to all smart speaker and IoT device manufacturers is necessary to resolve the existing segmentation and ambiguity of company policy and legislature @cite_20 . | {
"cite_N": [
"@cite_20"
],
"mid": [
"2899225656"
],
"abstract": [
"Smart speakers with voice assistants, like Amazon Echo and Google Home, provide benefits and convenience but also raise privacy concerns due to their continuously listening microphones. We studied people's reasons for and against adopting smart speakers, their privacy perceptions and concerns, and their privacy-seeking behaviors around smart speakers. We conducted a diary study and interviews with seventeen smart speaker users and interviews with seventeen non-users. We found that many non-users did not see the utility of smart speakers or did not trust speaker companies. In contrast, users express few privacy concerns, but their rationalizations indicate an incomplete understanding of privacy risks, a complicated trust relationship with speaker companies, and a reliance on the socio-technical context in which smart speakers reside. Users trade privacy for convenience with different levels of deliberation and privacy resignation. Privacy tensions arise between primary, secondary, and incidental users of smart speakers. Finally, current smart speaker privacy controls are rarely used, as they are not well-aligned with users' needs. Our findings can inform future smart speaker designs; in particular we recommend better integrating privacy controls into smart speaker interaction."
]
} |
1901.04787 | 2910090829 | Annotated datasets in different domains are critical for many supervised learning-based solutions to related problems and for the evaluation of the proposed solutions. Topics in natural language processing (NLP) similarly require annotated datasets to be used for such purposes. In this paper, we target at two NLP problems, named entity recognition and stance detection, and present the details of a tweet dataset in Turkish annotated for named entity and stance information. Within the course of the current study, both the named entity and stance annotations of the included tweets are made publicly available, although previously the dataset has been publicly shared with stance annotations only. We believe that this dataset will be useful for uncovering the possible relationships between named entity recognition and stance detection in tweets. | NER is a long-studied problem and its most common definition is the extraction of named entities such as person, location, and organization names from texts, in addition to some temporal and numeric expressions @cite_5 . Message Understanding Conference (MUC) series constitute an important milestone for NER research as they included NER competitions where related annotation guidelines, annotated datasets, and evaluation metrics were provided @cite_5 . The focus of recent NER studies has shifted from well-formed texts to social media texts including mostly microblog posts, as demonstrated in studies such as @cite_13 @cite_3 . Comprehensive surveys of NER research include @cite_18 @cite_12 @cite_2 . | {
"cite_N": [
"@cite_18",
"@cite_3",
"@cite_2",
"@cite_5",
"@cite_13",
"@cite_12"
],
"mid": [
"2020278455",
"2048613595",
"2807800730",
"160934661",
"2153848201",
"2070808142"
],
"abstract": [
"This survey covers fifteen years of research in the Named Entity Recognition and Classification (NERC) field, from 1991 to 2006. We report observations about languages, named entity types, domains and textual genres studied in the literature. From the start, NERC systems have been developed using hand-made rules, but now machine learning techniques are widely used. These techniques are surveyed along with other critical aspects of NERC such as features and evaluation methods. Features are word-level, dictionary-level and corpus-level representations of words in a document. Evaluation techniques, ranging from intuitive exact match to very complex matching techniques with adjustable cost of errors, are an indisputable key to progress.",
"Two main challenges of Named Entity Recognition (NER) for tweets are the insufficient information in a tweet and the lack of training data. We propose a novel method consisting of three core elements: (1) normalization of tweets; (2) combination of a K-Nearest Neighbors (KNN) classifier with a linear Conditional Random Fields (CRF) model; and (3) semisupervised learning framework. The tweet normalization preprocessing corrects common ill-formed words using a global linear model. The KNN-based classifier conducts prelabeling to collect global coarse evidence across tweets while the CRF model conducts sequential labeling to capture fine-grained information encoded in a tweet. The semisupervised learning plus the gazetteers alleviate the lack of training data. Extensive experiments show the advantages of our method over the baselines as well as the effectiveness of normalization, KNN, and semisupervised learning.",
"Abstract Textual information is becoming available in abundance on the web, arising the requirement of techniques and tools to extract the meaningful information. One of such an important information extraction task is Named Entity Recognition and Classification. It is the problem of finding the members of various predetermined classes, such as person, organization, location, date time, quantities, numbers etc. The concept of named entity extraction was first proposed in Sixth Message Understanding Conference in 1996. Since then, a number of techniques have been developed by many researchers for extracting diversity of entities from different languages and genres of text. Still, there is a growing interest among research community to develop more new approaches to extract diverse named entities which are helpful in various natural language applications. Here we present a survey of developments and progresses made in Named Entity Recognition and Classification research.",
"",
"People tweet more than 100 Million times daily, yielding a noisy, informal, but sometimes informative corpus of 140-character messages that mirrors the zeitgeist in an unprecedented manner. The performance of standard NLP tools is severely degraded on tweets. This paper addresses this issue by re-building the NLP pipeline beginning with part-of-speech tagging, through chunking, to named-entity recognition. Our novel T-ner system doubles F1 score compared with the Stanford NER system. T-ner leverages the redundancy inherent in tweets to achieve this performance, using LabeledLDA to exploit Freebase dictionaries as a source of distant supervision. LabeledLDA outperforms co-training, increasing F1 by 25 over ten common entity types. Our NLP tools are available at: http: github.com aritter twitter_nlp",
"Abstract Named Entity Recognition serves as the basis for many other areas in Information Management. However, it is unclear what the meaning of Named Entity is, and yet there is a general belief that Named Entity Recognition is a solved task. In this paper we analyze the evolution of the field from a theoretical and practical point of view. We argue that the task is actually far from solved and show the consequences for the development and evaluation of tools. We discuss topics for further research with the goal of bringing the task back to the research scenario."
]
} |
1901.04787 | 2910090829 | Annotated datasets in different domains are critical for many supervised learning-based solutions to related problems and for the evaluation of the proposed solutions. Topics in natural language processing (NLP) similarly require annotated datasets to be used for such purposes. In this paper, we target at two NLP problems, named entity recognition and stance detection, and present the details of a tweet dataset in Turkish annotated for named entity and stance information. Within the course of the current study, both the named entity and stance annotations of the included tweets are made publicly available, although previously the dataset has been publicly shared with stance annotations only. We believe that this dataset will be useful for uncovering the possible relationships between named entity recognition and stance detection in tweets. | Stance detection is a newly-emerging topic in NLP research, with an increasing number of on-topic studies conducted in recent years. It is usually defined as the automatic detection of whether the owner of a given piece of text is in favor of or against a given target. Therefore, a classification result as favor, against, or neither is generally expected at the end of the stance detection procedure. Similar to NER, earlier stance detection studies are performed on well-formed texts like online debate posts @cite_0 and essays @cite_15 and recent studies are mostly proposed for social media texts like tweets @cite_9 @cite_17 @cite_10 . | {
"cite_N": [
"@cite_9",
"@cite_0",
"@cite_15",
"@cite_10",
"@cite_17"
],
"mid": [
"2347127863",
"2145327091",
"1854537555",
"2792410075",
"2786918876"
],
"abstract": [
"We can often detect from a person’s utterances whether he or she is in favor of or against a given target entity—one’s stance toward the target. However, a person may express the same stance toward a target by using negative or positive language. Here for the first time we present a dataset of tweet–target pairs annotated for both stance and sentiment. The targets may or may not be referred to in the tweets, and they may or may not be the target of opinion in the tweets. Partitions of this dataset were used as training and test sets in a SemEval-2016 shared task competition. We propose a simple stance detection system that outperforms submissions from all 19 teams that participated in the shared task. Additionally, access to both stance and sentiment annotations allows us to explore several research questions. We show that although knowing the sentiment expressed by a tweet is beneficial for stance classification, it alone is not sufficient. Finally, we use additional unlabeled data through distant supervision techniques and word embeddings to further improve stance classification.",
"A growing body of work has highlighted the challenges of identifying the stance a speaker holds towards a particular topic, a task that involves identifying a holistic subjective disposition. We examine stance classification on a corpus of 4873 posts across 14 topics on ConvinceMe.net, ranging from the playful to the ideological. We show that ideological debates feature a greater share of rebuttal posts, and that rebuttal posts are significantly harder to classify for stance, for both humans and trained classifiers. We also demonstrate that the number of subjective expressions varies across debates, a fact correlated with the performance of systems sensitive to sentiment-bearing terms. We present results for identifing rebuttals with 63 accuracy, and for identifying stance on a per topic basis that range from 54 to 69 , as compared to unigram baselines that vary between 49 and 60 . Our results suggest that methods that take into account the dialogic context of such posts might be fruitful.",
"We present a new approach to the automated classification of document-level argument stance, a relatively under-researched sub-task of Sentiment Analysis. In place of the noisy online debate data currently used in stance classification research, a corpus of student essays annotated for essay-level stance is constructed for use in a series of classification experiments. A novel set of features designed to capture the stance, stance targets, and topical relationships between the essay prompt and the student's essay is described. Models trained on this feature set showed significant increases in accuracy relative to two high baselines.",
"Stance detection is a subproblem of sentiment analysis where the stance of the author of a piece of natural language text for a particular target (either explicitly stated in the text or not) is explored. The stance output is usually given as Favor, Against, or Neither. In this paper, we target at stance detection on sports-related tweets and present the performance results of our SVM-based stance classifiers on such tweets. First, we describe three versions of our proprietary tweet data set annotated with stance information, all of which are made publicly available for research purposes. Next, we evaluate SVM classifiers using different feature sets for stance detection on this data set. The employed features are based on unigrams, bigrams, hashtags, external links, emoticons, and lastly, named entities. The results indicate that joint use of the features based on unigrams, hashtags, and named entities by SVM classifiers is a plausible approach for stance detection problem on sports-related tweets.",
"The task of stance detection is to determine whether someone is in favor or against a certain topic. A person may express the same stance towards a topic using positive or negative words. In this paper, several features and classifiers are explored to find out the combination that yields the best performance for stance detection. Due to the large number of features, ReliefF feature selection method was used to reduce the large dimensional feature space and improve the generalization capabilities. Experimental analyses were performed on five datasets, and the obtained results revealed that a majority vote classifier of the three classifiers: Random Forest, linear SVM and Gaussian Naive Bayes classifiers can be adopted for stance detection task."
]
} |
1901.04678 | 2909165485 | Many real-life dynamical systems change abruptly followed by almost stationary periods. In this paper, we consider streams of data with such abrupt behavior and investigate the problem of tracking their statistical properties in an online manner. We devise a tracking procedure where an estimator that is suitable for a stationary environment is combined together with an event detection method such that the estimator rapidly can jump to a more suitable value if an event is detected. Combining an estimation procedure with detection procedure is commonly known idea in the literature. However, our contribution lies in building the detection procedure based on the difference between the stationary estimator and a Stochastic Learning Weak Estimator (SLWE). The SLWE estimator is known to be the state-of-the art approach to tracking properties of non-stationary environments and thus should be a better choice to detect changes in abruptly changing environments than the far more common sliding window based approaches. To the best of our knowledge, the event detection procedure suggested by (2012) is the only procedure in the literature taking advantage of the powerful tracking properties of the SLWE estimator. The procedure in is however quite complex and not well founded theoretically compared to the procedures in this paper. In this paper, we focus on estimation procedure for the binomial and multinomial distributions, but our approach can be easily generalized to cover other distributions as well. Extensive simulation results based on both synthetic and real-life data related to news classification demonstrate that our estimation procedure is easy to tune and performs well. | When it comes to extensions of the sliding window, Koychev . proposed a new paradigm called Gradual Forgetting (GF) @cite_2 @cite_8 @cite_30 . According to the principles of GF, observations in the same window are treated unequally when computing the estimates based on weight assignment. Recent observations receive more weights than distant ones. Different forgetting functions were proposed ranging from linear @cite_32 to exponential @cite_5 . | {
"cite_N": [
"@cite_30",
"@cite_8",
"@cite_32",
"@cite_2",
"@cite_5"
],
"mid": [
"1545292741",
"1501483081",
"1481458213",
"",
"2133088989"
],
"abstract": [
"In recent years, many systems have been developed which aim at helping users to find pieces of information or other objects that are in accordance with their personal interests. In these systems, machine learning methods are often used to acquire the user interest profile. Frequently user interests drift with time. The ability to adapt fast to the current user's interests is an important feature for recommender systems. This paper presents a method for dealing with drifting interests by introducing the notion of gradual forgetting. Thus, the last observations should be more \"important\" for the learning algorithm than the old ones and the importance of an observation should decrease with time. The conducted experiments with a recommender system show that the gradual forgetting improves the ability to adapt to drifting user's interests. Experiments with the STAGGER problem provide additional evidences that gradual forgetting is able to improve the prediction accuracy on drifting concepts (incl. drifting user's interests).",
"This paper addresses the task of learning concept descriptions from streams of data. As new data are obtained the concept description has to be updated regularly to include the new data. In this case we can face the problem that the concept changes over time. Hence the old data become irrelevant to the current concept and have to be removed from the training dataset. This problem is known in the area of machine learning as concept drift. We develop a mechanism that tracks changing concepts using an adaptive time window. The method uses a significance test to detect concept drift and then optimizes the size of the time window, aiming to maximise the classification accuracy on recent data. The method presented is general in nature and can be used with any learning algorithm. The method is tested with three standard learning algorithms (kNN, ID3 and NBC). Three datasets have been used in these experiments. The experimental results provide evidence that the suggested forgetting mechanism is able significantly to improve predictive accuracy on changing concepts.",
"The paper presents a method for gradual forgetting, which is applied for learning drifting concepts. The approach suggests the introduction of a time-based forgetting function, which makes the last observations more significant for the learning algorithms than the old ones. The importance of examples decreases with time. Namely, the forgetting function provides each training example with a weight, according its appearance over time. The used learning algorithms are modified to be able to deal with weighted examples. Experiments are conducted with the STAGGER problem using NBC and ID3 algorithms. The results provide evidences that the utilization of gradual forgetting is able to improve the predictive accuracy on drifting concepts. The method was also implemented for a recommender system, which learns about user from observations. The results from experiments with this application show that the method is able to improve the system’s adaptability to drifting user’s interest.",
"",
"For many learning tasks where data is collected over an extended period of time, its underlying distribution is likely to change. A typical example is information filtering, i.e. the adaptive classification of documents with respect to a particular user interest. Both the interest of the user and the document content change over time. A filtering system should be able to adapt to such concept changes. This paper proposes several methods to handle such concept drifts with support vector machines. The methods either maintain an adaptive time window on the training data [13], select representative training examples, or weight the training examples [15]. The key idea is to automatically adjust the window size, the example selection, and the example weighting, respectively, so that the estimated generalization error is minimized. The approaches are both theoretically well-founded as well as effective and efficient in practice. Since they do not require complicated parameterization, they are simpler to use and more robust than comparable heuristics. Experiments with simulated concept drift scenarios based on real-world text data compare the new methods with other window management approaches. We show that they can effectively select an appropriate window size, example selection, and example weighting, respectively, in a robust way. We also explain how the proposed example selection and weighting approaches can be turned into incremental approaches. Since most evaluation methods for machine learning, like e.g. cross-validation, assume that the examples are independent and identically distributed, which is clearly unrealistic in the case of concept drift, alternative evaluation schemes are used to estimate and optimize the performance of each learning step within the concept drift handling frameworks as well as to evaluate and compare the different frameworks."
]
} |
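The gradual-forgetting idea in the entry above can be made concrete by weighting the observations inside a window so that recent samples dominate the estimate, using either a linear or an exponential forgetting function. A small sketch follows, with an assumed window length and decay rate.

```python
# Sketch of gradual forgetting inside a sliding window: observations are
# weighted so that recent ones count more, with either a linear or an
# exponential forgetting function. Window size and decay rate are assumed.
import numpy as np

def weighted_mean(window, kind="linear", decay=0.9):
    window = np.asarray(window, dtype=float)
    n = len(window)
    if kind == "linear":
        w = np.arange(1, n + 1, dtype=float)      # oldest gets weight 1, newest n
    else:
        w = decay ** np.arange(n - 1, -1, -1)     # exponential decay with age
    return float(np.sum(w * window) / np.sum(w))

stream = [0] * 50 + [1] * 10                      # abrupt switch near the end
print(weighted_mean(stream[-30:], "linear"))      # leans toward the new regime
print(weighted_mean(stream[-30:], "exponential"))
```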
1901.04678 | 2909165485 | Many real-life dynamical systems change abruptly followed by almost stationary periods. In this paper, we consider streams of data with such abrupt behavior and investigate the problem of tracking their statistical properties in an online manner. We devise a tracking procedure where an estimator that is suitable for a stationary environment is combined together with an event detection method such that the estimator rapidly can jump to a more suitable value if an event is detected. Combining an estimation procedure with detection procedure is commonly known idea in the literature. However, our contribution lies in building the detection procedure based on the difference between the stationary estimator and a Stochastic Learning Weak Estimator (SLWE). The SLWE estimator is known to be the state-of-the art approach to tracking properties of non-stationary environments and thus should be a better choice to detect changes in abruptly changing environments than the far more common sliding window based approaches. To the best of our knowledge, the event detection procedure suggested by (2012) is the only procedure in the literature taking advantage of the powerful tracking properties of the SLWE estimator. The procedure in is however quite complex and not well founded theoretically compared to the procedures in this paper. In this paper, we focus on estimation procedure for the binomial and multinomial distributions, but our approach can be easily generalized to cover other distributions as well. Extensive simulation results based on both synthetic and real-life data related to news classification demonstrate that our estimation procedure is easy to tune and performs well. | In @cite_9 , Oommen and Rueda presented the SLWE to estimate the underlying parameters of time-varying binomial and multinomial distributions. The SLWE originally stems from the theory of variable structure Learning Automata @cite_17 , and more particularly, its reward-inaction flavor. The most appealing property of the SLWE, which makes it the state of the art, is its multiplicative form of updates. Two different counterparts of the SLWE @cite_9 for discretized spaces were recently proposed in @cite_14 and @cite_26 . In a similar manner to the SLWE, the latter solution also suffers from the problem of tuning the resolution parameter. | {
"cite_N": [
"@cite_9",
"@cite_14",
"@cite_26",
"@cite_17"
],
"mid": [
"2026158992",
"2400184102",
"2523331129",
"1538558539"
],
"abstract": [
"In this paper, we formally present a novel estimation method, referred to as the Stochastic Learning Weak Estimator (SLWE), which yields the estimate of the parameters of a binomial distribution, where the convergence of the estimate is weak, i.e. with regard to the first and second moments. The estimation is based on the principles of stochastic learning. The mean of the final estimate is independent of the scheme's learning coefficient, @l, and both the variance of the final distribution and the speed decrease with @l. Similar results are true for the multinomial case, except that the equations transform from being of a scalar type to be of a vector type. Amazingly enough, the speed of the latter only depends on the same parameter, @l, which turns out to be the only non-unity eigenvalue of the underlying stochastic matrix that determines the time-dependence of the estimates. An empirical analysis on synthetic data shows the advantages of the scheme for non-stationary distributions. The paper also briefly reports (without detailed explanation) conclusive results that demonstrate the superiority of SLWE in pattern-recognition-based data compression, where the underlying data distribution is non-stationary. Finally, and more importantly, the paper includes the results of two pattern recognition exercises, the first of which involves artificial data, and the second which involves the recognition of the types of data that are present in news reports of the Canadian Broadcasting Corporation (CBC). The superiority of the SLWE in both these cases is demonstrated.",
"The task of designing estimators that are able to track time-varying distributions has found promising applications in many real-life problems.Existing approaches resort to sliding windows that track changes by discarding old observations. In this paper, we report a novel estimator referred to as the Stochastic Discretized Weak Estimator (SDWE), that is based on the principles of discretized Learning Automata (LA). In brief, the estimator is able to estimate the parameters of a time varying binomial distribution using finite memory. The estimator tracks changes in the distribution by operating a controlled random walk in a discretized probability space. The steps of the estimator are discretized so that the updates are done in jumps, and thus the convergence speed is increased. Further, the state transitions are both state-dependent and randomized. As far as we know, such a scheme is both novel and pioneering. The results which have first been proven for binomial distributions have subsequently been extended for the multinomial case, using which they can be applied to any univariate distribution using a histogram-based scheme. The most outstanding and pioneering contribution of our work is that of achieving multinomial estimation without relying on a set of binomial estimators, and where the underlying strategy is truly randomized. Interestingly, the estimator possesses a low computational complexity that is independent of the number of parameters of the multinomial distribution. The generalization of these results for other distributions has also been alluded to. The paper briefly reports conclusive experimental results that prove the ability of the SDWE to cope with non-stationary environments with high adaptation and accuracy. HighlightsWe present finite-memory estimation techniques for non-stationary environments.These new techniques use the principles of discretized Learning Automata.The results have been tested for many time-varying problems and applications.",
"Generally speaking, research in the field of estimation involves designing strong estimators, i.e., those which converge with probability 1, as the number of samples increases indefinitely. But when the underlying distribution is nonstationary, one should rather seek for weak estimators, i.e., those which can unlearn when the distribution has changed. One such estimator, the so-called stochastic learning weak estimator (SLWE) was based on the principles of continuous stochastic learning automata (LA). A problem that has been unsolved has been that of designing such weak estimators in the context of systems with finite memory, which is what we investigate here. In this paper, we propose a new family of stochastic discretized weak estimators which can track time-varying binomial 1 distributions. As opposed to the SLWE, our proposed estimator is discretized, 2 i.e., the estimate can assume only a finite number of values. By virtue of discretization, our estimator realizes extremely fast adjustments of the running estimates by executing jumps, and it is thus able to robustly, and very quickly, track changes in the parameters of the distribution after a switch has occurred. The design principle of our strategy is based on a solution for the stochastic search on the line problem. In order to achieve efficient estimation, we have to first infer (or rather simulate) an Artificial Oracle which informs the LA whether to go right or left, which is then utilized to infer whether we are to increase the current estimate or to decrease it. This paper briefly reports pioneering and conclusive experimental results that demonstrate the ability of the proposed estimator to cope with nonstationary environments. 1 The extension to multinomial and to other distributions is straightforward.",
""
]
} |
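The multiplicative form of the SLWE update mentioned in the entry above is easy to state for a Bernoulli/binomial proportion: the estimate is pulled multiplicatively toward 1 or toward 0 depending on the latest observation. A minimal sketch follows, with an assumed learning coefficient of 0.97.

```python
# Sketch of the multiplicative SLWE update for a binomial proportion.
# The learning coefficient lam = 0.97 and the toy stream are assumed values.
def slwe_update(p, x, lam=0.97):
    """One SLWE step: p is the current estimate, x the new binary observation."""
    if x == 1:
        return 1.0 - lam * (1.0 - p)   # move multiplicatively toward 1
    return lam * p                     # move multiplicatively toward 0

p = 0.5
for x in [1, 1, 0, 1, 1, 1, 0, 1]:     # toy stream with P(x=1) around 0.75
    p = slwe_update(p, x)
print(round(p, 3))
```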
1901.04678 | 2909165485 | Many real-life dynamical systems change abruptly followed by almost stationary periods. In this paper, we consider streams of data with such abrupt behavior and investigate the problem of tracking their statistical properties in an online manner. We devise a tracking procedure where an estimator that is suitable for a stationary environment is combined together with an event detection method such that the estimator rapidly can jump to a more suitable value if an event is detected. Combining an estimation procedure with detection procedure is commonly known idea in the literature. However, our contribution lies in building the detection procedure based on the difference between the stationary estimator and a Stochastic Learning Weak Estimator (SLWE). The SLWE estimator is known to be the state-of-the art approach to tracking properties of non-stationary environments and thus should be a better choice to detect changes in abruptly changing environments than the far more common sliding window based approaches. To the best of our knowledge, the event detection procedure suggested by (2012) is the only procedure in the literature taking advantage of the powerful tracking properties of the SLWE estimator. The procedure in is however quite complex and not well founded theoretically compared to the procedures in this paper. In this paper, we focus on estimation procedure for the binomial and multinomial distributions, but our approach can be easily generalized to cover other distributions as well. Extensive simulation results based on both synthetic and real-life data related to news classification demonstrate that our estimation procedure is easy to tune and performs well. | Gama @cite_18 presents a clear distinction between memory management and forgetting mechanisms. Adaptive windowing @cite_27 works with the premise of growing the size of the sliding window indefinitely until a change is detected via a change detection technique. In this situation, the size of the window is reset whenever a change is detected. | {
"cite_N": [
"@cite_27",
"@cite_18"
],
"mid": [
"2022775778",
"2099419573"
],
"abstract": [
"On-line learning in domains where the target concept depends on some hidden context poses serious problems. A changing context can induce changes in the target concepts, producing what is known as concept drift. We describe a family of learning algorithms that flexibly react to concept drift and can take advantage of situations where contexts reappear. The general approach underlying all these algorithms consists of (1) keeping only a window of currently trusted examples and hypotheses; (2) storing concept descriptions and re-using them when a previous context re-appears; and (3) controlling both of these functions by a heuristic that constantly monitors the system's behavior. The paper reports on experiments that test the systems' performance under various conditions such as different levels of noise and different extent and rate of concept drift.",
"Concept drift primarily refers to an online supervised learning scenario when the relation between the input data and the target variable changes over time. Assuming a general knowledge of supervised learning in this article, we characterize adaptive learning processes; categorize existing strategies for handling concept drift; overview the most representative, distinct, and popular techniques and algorithms; discuss evaluation methodology of adaptive algorithms; and present a set of illustrative applications. The survey covers the different facets of concept drift in an integrated way to reflect on the existing scattered state of the art. Thus, it aims at providing a comprehensive introduction to the concept drift adaptation for researchers, industry analysts, and practitioners."
]
} |
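The adaptive-windowing strategy described in the entry above grows the window until a change detector fires and then resets it. The sketch below uses a crude mean-difference test between the two halves of the window as a stand-in detector; actual implementations such as ADWIN rely on proper statistical bounds, so the threshold here is purely an assumption.

```python
# Sketch of adaptive windowing: grow the window until a (simplistic) change
# detector fires, then drop the data from before the change. The threshold 0.3
# and the minimum half-window size are assumptions, not a statistical bound.
def adaptive_window(stream, threshold=0.3):
    window = []
    for x in stream:
        window.append(x)
        half = len(window) // 2
        if half >= 5:
            older, recent = window[:half], window[half:]
            if abs(sum(recent) / len(recent) - sum(older) / len(older)) > threshold:
                window = window[half:]          # drop data from before the change
        yield sum(window) / len(window)         # running estimate on current window

stream = [0] * 40 + [1] * 40
print(list(adaptive_window(stream))[-1])        # close to 1 after the switch
```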
1901.04678 | 2909165485 | Many real-life dynamical systems change abruptly followed by almost stationary periods. In this paper, we consider streams of data with such abrupt behavior and investigate the problem of tracking their statistical properties in an online manner. We devise a tracking procedure where an estimator that is suitable for a stationary environment is combined together with an event detection method such that the estimator rapidly can jump to a more suitable value if an event is detected. Combining an estimation procedure with detection procedure is commonly known idea in the literature. However, our contribution lies in building the detection procedure based on the difference between the stationary estimator and a Stochastic Learning Weak Estimator (SLWE). The SLWE estimator is known to be the state-of-the art approach to tracking properties of non-stationary environments and thus should be a better choice to detect changes in abruptly changing environments than the far more common sliding window based approaches. To the best of our knowledge, the event detection procedure suggested by (2012) is the only procedure in the literature taking advantage of the powerful tracking properties of the SLWE estimator. The procedure in is however quite complex and not well founded theoretically compared to the procedures in this paper. In this paper, we focus on estimation procedure for the binomial and multinomial distributions, but our approach can be easily generalized to cover other distributions as well. Extensive simulation results based on both synthetic and real-life data related to news classification demonstrate that our estimation procedure is easy to tune and performs well. | Another interesting family of approaches assume that the true value of the parameter being estimated is revealed after some delay, which enables quantifying the error of the estimator. In such settings, some research @cite_23 have used ensemble methods where the output of different estimators is combined using weighted majority voting. The weights of each estimator is adjusted based on its error. In this sense, estimation methods that produce high error see their weight decrease. | {
"cite_N": [
"@cite_23"
],
"mid": [
"2148798622"
],
"abstract": [
"In the real world concepts are often not stable but change with time. A typical example of this in the biomedical context is antibiotic resistance, where pathogen sensitivity may change over time as new pathogen strains develop resistance to antibiotics that were previously effective. This problem, known as concept drift, complicates the task of learning a model from data and requires special approaches, different from commonly used techniques that treat arriving instances as equally important contributors to the final concept. The underlying data distribution may change as well, making previously built models useless. This is known as virtual concept drift. Both types of concept drifts make regular updates of the model necessary. Among the most popular and effective approaches to handle concept drift is ensemble learning, where a set of models built over different time periods is maintained and the best model is selected or the predictions of models are combined, usually according to their expertise level regarding the current concept. In this paper we propose the use of an ensemble integration technique that would help to better handle concept drift at an instance level. In dynamic integration of classifiers, each base classifier is given a weight proportional to its local accuracy with regard to the instance tested, and the best base classifier is selected, or the classifiers are integrated using weighted voting. Our experiments with synthetic data sets simulating abrupt and gradual concept drifts and with a real-world antibiotic resistance data set demonstrate that dynamic integration of classifiers built over small time intervals or fixed-sized data blocks can be significantly better than majority voting and weighted voting, which are currently the most commonly used integration techniques for handling concept drift with ensembles."
]
} |
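The ensemble scheme described in the entry above combines several estimators by weighted voting and down-weights an estimator whenever its error, computed once the true value is revealed, is large. A toy sketch follows, with an assumed multiplicative down-weighting factor.

```python
# Sketch of error-weighted combination of several estimators: larger error
# means stronger multiplicative down-weighting. The decay factor beta = 0.5
# and the toy estimates are assumed values for illustration.
def combine(estimates, weights):
    s = sum(weights)
    return sum(w * e for w, e in zip(weights, estimates)) / s

def update_weights(weights, estimates, truth, beta=0.5):
    return [w * beta ** abs(e - truth) for w, e in zip(weights, estimates)]

weights = [1.0, 1.0, 1.0]
estimates = [0.78, 0.5, 0.2]                   # estimator 0 is consistently best
for truth in [0.8] * 20:                       # true value revealed after a delay
    weights = update_weights(weights, estimates, truth)
print(round(combine(estimates, weights), 3))   # dominated by the best estimator
```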
1901.04678 | 2909165485 | Many real-life dynamical systems change abruptly followed by almost stationary periods. In this paper, we consider streams of data with such abrupt behavior and investigate the problem of tracking their statistical properties in an online manner. We devise a tracking procedure where an estimator that is suitable for a stationary environment is combined together with an event detection method such that the estimator rapidly can jump to a more suitable value if an event is detected. Combining an estimation procedure with detection procedure is commonly known idea in the literature. However, our contribution lies in building the detection procedure based on the difference between the stationary estimator and a Stochastic Learning Weak Estimator (SLWE). The SLWE estimator is known to be the state-of-the art approach to tracking properties of non-stationary environments and thus should be a better choice to detect changes in abruptly changing environments than the far more common sliding window based approaches. To the best of our knowledge, the event detection procedure suggested by (2012) is the only procedure in the literature taking advantage of the powerful tracking properties of the SLWE estimator. The procedure in is however quite complex and not well founded theoretically compared to the procedures in this paper. In this paper, we focus on estimation procedure for the binomial and multinomial distributions, but our approach can be easily generalized to cover other distributions as well. Extensive simulation results based on both synthetic and real-life data related to news classification demonstrate that our estimation procedure is easy to tune and performs well. | In the same perspective, the estimated error can be used for re-initializing the estimation as performed in @cite_21 . In all brevity, changes are detected based on comparing sections of data, using statistical analysis to detect distributional changes, i.e., abrupt or gradual changes in the mean of the data points when compared with a baseline mean with a random noise component. One option is also to keep a reference window and compare recent windows with the reference window to detect changes @cite_16 . This can, for example, be done based on comparing the probability distributions of the reference window and the recent window using Kullback-Leibler divergence @cite_34 @cite_20 . | {
"cite_N": [
"@cite_34",
"@cite_21",
"@cite_20",
"@cite_16"
],
"mid": [
"3487859",
"2069701377",
"1500307578",
"2047386129"
],
"abstract": [
"An ophthalmic lens for providing correction of distortion of the vertical position of an image in a lens prescribed for the condition of anisometropia comprises upper and lower portions having different optical centers. The lens surfaces of the upper and lower portions merge in a transverse band providing a smooth transition between the upper and lower lens surfaces over a vertical distance for example of 2 - 8 mm. The transverse band removes the conventional sharply visible boundary between the upper and lower lens surfaces and by introducing aberration into the lens at this point deters effective use of the lens over an angle of vision corresponding substantially to the angle of a cone of light entering the pupil of the eye and so prevents double vision occurring at the boundary of the lens surfaces. Methods of production of the lens are also disclosed.",
"Classifying streaming data requires the development of methods which are computationally efficient and able to cope with changes in the underlying distribution of the stream, a phenomenon known in the literature as concept drift. We propose a new method for detecting concept drift which uses an exponentially weighted moving average (EWMA) chart to monitor the misclassification rate of an streaming classifier. Our approach is modular and can hence be run in parallel with any underlying classifier to provide an additional layer of concept drift detection. Moreover our method is computationally efficient with overhead O(1) and works in a fully online manner with no need to store data points in memory. Unlike many existing approaches to concept drift detection, our method allows the rate of false positive detections to be controlled and kept constant over time.",
"In this paper we study the problem of constructing histograms from high-speed time-changing data streams. Learning in this context requires the ability to process examples once at the rate they arrive, maintaining a histogram consistent with the most recent data, and forgetting out-date data whenever a change in the distribution is detected. To construct histogram from high-speed data streams we use the two layer structure used in the Partition Incremental Discretization (PiD) algorithm. Our contribution is a new method to detect whenever a change in the distribution generating examples occurs. The base idea consists of monitoring distributions from two different time windows: the reference time window, that reflects the distribution observed in the past; and the current time window reflecting the distribution observed in the most recent data. We compare both distributions and signal a change whenever they are greater than a threshold value, using three different methods: the Entropy Absolute Difference, the Kullback-Leibler divergence and the Cosine Distance. The experimental results suggest that Kullback-Leibler divergence exhibit high probability in change detection, faster detection rates, with few false positives alarms.",
"An established method to detect concept drift in data streams is to perform statistical hypothesis testing on the multivariate data in the stream. The statistical theory offers rank-based statistics for this task. However, these statistics depend on a fixed set of characteristics of the underlying distribution. Thus, they work well whenever the change in the underlying distribution affects the properties measured by the statistic, but they perform not very well, if the drift influences the characteristics caught by the test statistic only to a small degree. To address this problem, we show how uniform convergence bounds in learning theory can be adjusted for adaptive concept drift detection. In particular, we present three novel drift detection tests, whose test statistics are dynamically adapted to match the actual data at hand. The first one is based on a rank statistic on density estimates for a binary representation of the data, the second compares average margins of a linear classifier induced by the 1-norm support vector machine (SVM), and the last one is based on the average zero-one, sigmoid or stepwise linear error rate of an SVM classifier. We compare these new approaches with the maximum mean discrepancy method, the StreamKrimp system, and the multivariate Wald–Wolfowitz test. The results indicate that the new methods are able to detect concept drift reliably and that they perform favorably in a precision-recall analysis. Copyright © 2009 Wiley Periodicals, Inc. Statistical Analysis and Data Mining 2: 311-327, 2009"
]
} |
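The reference-window approach in the entry above can be sketched by binning both windows into histograms and raising an alarm when their Kullback-Leibler divergence exceeds a threshold. The bin count, window length, and threshold below are assumptions for illustration.

```python
# Sketch of change detection by comparing a reference window against the most
# recent window with Kullback-Leibler divergence over histogram estimates.
import numpy as np

def kl_divergence(p, q, eps=1e-9):
    p, q = p + eps, q + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def detect_change(reference, recent, bins=10, threshold=0.5):
    lo = min(reference.min(), recent.min())
    hi = max(reference.max(), recent.max())
    p, _ = np.histogram(reference, bins=bins, range=(lo, hi))
    q, _ = np.histogram(recent, bins=bins, range=(lo, hi))
    return kl_divergence(p.astype(float), q.astype(float)) > threshold

rng = np.random.default_rng(1)
reference = rng.normal(0.0, 1.0, 200)
recent = rng.normal(2.0, 1.0, 200)             # shifted mean -> should alarm
print(detect_change(reference, recent))        # True
```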
1901.04636 | 2910887632 | The Search for Extra-terrestrial Intelligence (SETI) aims to find technological signals of extra-solar origin. Radio frequency SETI is characterized by large unlabeled datasets and complex interference environment. The infinite possibilities of potential signal types require generalizable signal processing techniques with little human supervision. We present a generative model of self-supervised deep learning that can be used for anomaly detection and spatial filtering. We develop and evaluate our approach on spectrograms containing narrowband signals collected by Breakthrough Listen at the Green Bank telescope. The proposed approach is not meant to replace current narrowband searches but to demonstrate the potential to generalize to other signal types. | Recurrent neural network (RNN) and long short-term memory (LSTM) models ( @cite_11 @cite_17 @cite_1 @cite_13 @cite_16 @cite_7 @cite_14 ) have driven many of the recent advances in predictive video generation. The LSTM encoder-decoder models proposed in works such as @cite_16 and @cite_13 provide a general framework for sequence-to-sequence learning problems by training two separate LSTM models, one to map the input sequence to a vector of fixed dimension and another to extract the output sequence from that vector, thereby predicting future sequences or reconstructing past sequences. Predictive anomaly detection in radio frequency data has challenges of high thermal noise and large variations of signal strength. Time-domain anomaly detection was first explored in @cite_19 . The authors used a recurrent network and showed effective detection of a range of synthetic anomalies. Most recently, @cite_3 explores predictive anomaly detection in spectrograms and spectral density functions (SDF). There the authors experimented with simulated signals and anomalies and concluded that there was a greater probability of detection in the SDF domain than in spectrograms. | {
"cite_N": [
"@cite_11",
"@cite_14",
"@cite_7",
"@cite_1",
"@cite_3",
"@cite_19",
"@cite_16",
"@cite_13",
"@cite_17"
],
"mid": [
"2951183276",
"1810943226",
"2950635152",
"1568514080",
"2792082283",
"2547444416",
"2130942839",
"2952453038",
"2951805548"
],
"abstract": [
"Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or \"temporally deep\", are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are \"doubly deep\"' in that they can be compositional in spatial and temporal \"layers\". Such models may have advantages when target concepts are complex and or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they directly can map variable-length inputs (e.g., video frames) to variable length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and or optimized.",
"This paper shows how Long Short-term Memory recurrent neural networks can be used to generate complex sequences with long-range structure, simply by predicting one data point at a time. The approach is demonstrated for text (where the data are discrete) and online handwriting (where the data are real-valued). It is then extended to handwriting synthesis by allowing the network to condition its predictions on a text sequence. The resulting system is able to generate highly realistic cursive handwriting in a wide variety of styles.",
"In this paper, we propose a novel neural network model called RNN Encoder-Decoder that consists of two recurrent neural networks (RNN). One RNN encodes a sequence of symbols into a fixed-length vector representation, and the other decodes the representation into another sequence of symbols. The encoder and decoder of the proposed model are jointly trained to maximize the conditional probability of a target sequence given a source sequence. The performance of a statistical machine translation system is empirically found to improve by using the conditional probabilities of phrase pairs computed by the RNN Encoder-Decoder as an additional feature in the existing log-linear model. Qualitatively, we show that the proposed model learns a semantically and syntactically meaningful representation of linguistic phrases.",
"We propose a strong baseline model for unsupervised feature learning using video data. By learning to predict missing frames or extrapolate future frames from an input video sequence, the model discovers both spatial and temporal correlations which are useful to represent complex deformations and motion patterns. The models we propose are largely borrowed from the language modeling literature, and adapted to the vision domain by quantizing the space of image patches into a large dictionary. We demonstrate the approach on both a filling and a generation task. For the first time, we show that, after training on natural videos, such a model can predict non-trivial motions over short video sequences.",
"Intrusion detection has become one of the most critical tasks in a wireless network to prevent service outages that can take long to fix. The sheer variety of anomalous events necessitates adopting cognitive anomaly detection methods instead of the traditional signature-based detection techniques. This paper proposes an anomaly detection methodology for wireless systems that is based on monitoring and analyzing radio frequency (RF) spectrum activities. Our detection technique leverages an existing solution for the video prediction problem, and uses it on image sequences generated from monitoring the wireless spectrum. The deep predictive coding network is trained with images corresponding to the normal behavior of the system, and whenever there is an anomaly, its detection is triggered by the deviation between the actual and predicted behavior. For our analysis, we use the images generated from the time-frequency spectrograms and spectral correlation functions of the received RF signal. We test our technique on a dataset which contains anomalies such as jamming, chirping of transmitters, spectrum hijacking, and node failure, and evaluate its performance using standard classifier metrics: detection ratio, and false alarm rate. Simulation results demonstrate that the proposed methodology effectively detects many unforeseen anomalous events in real time. We discuss the applications, which encompass industrial IoT, autonomous vehicle control and mission-critical communications services.",
"We introduce a powerful recurrent neural network based method for novelty detection to the application of detecting radio anomalies. This approach holds promise in significantly increasing the ability of naive anomaly detection to detect small anomalies in highly complex complexity multi-user radio bands. We demonstrate the efficacy of this approach on a number of common real over the air radio communications bands of interest and quantify detection performance in terms of probability of detection an false alarm rates across a range of interference to band power ratios and compare to baseline methods.",
"Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT-14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous state of the art. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier.",
"We use multilayer Long Short Term Memory (LSTM) networks to learn representations of video sequences. Our model uses an encoder LSTM to map an input sequence into a fixed length representation. This representation is decoded using single or multiple decoder LSTMs to perform different tasks, such as reconstructing the input sequence, or predicting the future sequence. We experiment with two kinds of input sequences - patches of image pixels and high-level representations (\"percepts\") of video frames extracted using a pretrained convolutional net. We explore different design choices such as whether the decoder LSTMs should condition on the generated output. We analyze the outputs of the model qualitatively to see how well the model can extrapolate the learned video representation into the future and into the past. We try to visualize and interpret the learned features. We stress test the model by running it on longer time scales and on out-of-domain data. We further evaluate the representations by finetuning them for a supervised learning problem - human action recognition on the UCF-101 and HMDB-51 datasets. We show that the representations help improve classification accuracy, especially when there are only a few training examples. Even models pretrained on unrelated datasets (300 hours of YouTube videos) can help action recognition performance.",
"We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state of the art results in retrieval experiments on Flickr8K, Flickr30K and MSCOCO datasets. We then show that the generated descriptions significantly outperform retrieval baselines on both full images and on a new dataset of region-level annotations."
]
} |
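The encoder-decoder and predictive models surveyed in the record above are typically turned into anomaly detectors by training a predictor on "normal" data and flagging inputs whose prediction error is unusually large. The following is a minimal, hypothetical PyTorch sketch of that idea for spectrogram frames (one frequency vector per time step); the architecture, sizes, and the absence of a training loop are illustrative assumptions, not the model used in the paper.

```python
import torch
import torch.nn as nn

class NextFramePredictor(nn.Module):
    """Predict spectrogram frame t+1 from frames 1..t with an LSTM."""
    def __init__(self, n_freq_bins, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(n_freq_bins, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_freq_bins)

    def forward(self, frames):                  # frames: (batch, time, n_freq_bins)
        hidden_states, _ = self.lstm(frames)
        return self.head(hidden_states)         # prediction of the next frame at each step

def anomaly_scores(model, frames):
    """Per-step prediction error; after training on normal data, large values flag unexpected structure."""
    with torch.no_grad():
        pred = model(frames[:, :-1, :])         # predict frames 2..T from frames 1..T-1
        err = ((pred - frames[:, 1:, :]) ** 2).mean(dim=-1)
    return err                                   # (batch, time-1)

model = NextFramePredictor(n_freq_bins=256)
frames = torch.randn(1, 64, 256)                # stand-in for one spectrogram
print(anomaly_scores(model, frames).shape)      # torch.Size([1, 63])
```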
1907.06223 | 2956688917 | This manuscript presents a fast direct solution technique for solving two dimensional wave scattering problems from quasi-periodic multilayered structures. The fast solver is built from the linear system that results from the discretization of a boundary integral formulation that is robust at Wood's anomalies. When the interface geometries are complex, the linear system is too large to be handled via dense linear algebra. The key building block of the proposed solver is a fast direct solver for the large sparse block system that corresponds to the discretization of boundary integral equations. The solver makes use of hierarchical matrix inversion techniques, has a cost that scales linearly with respect to the number of unknowns on the interfaces and the precomputation can be used for all choices of boundary data. By partitioning the remainder of the precomputation into parts based on their dependence on incident angle, the proposed direct solver is efficient for problems involving many incident angles like those that arise in applications. For example, for a problem on an eleven layer geometry where the solution is desired for 287 incident angles, the proposed solution technique is 87 times faster than building a new fast direct solver for each new incident angle. An additional feature of the proposed solution technique is that solving a problem where an interface or layer property is changed requires an update in the precomputation that costs linearly with respect to the number of points on the affected interfaces with a small constant prefactor. The efficiency for modified geometries and multiple solves makes the solution technique well suited for optimal design and inverse scattering applications. | Direct discretization is possible via finite difference or finite element methods @cite_13 but it faces two challenges: (i) meshing to the interfaces to maintain accuracy and (ii) enforcing the radiation condition. Meshing can be effectively handled using mesh generation software such as GMESH @cite_40 . Techniques such as perfectly matched layers @cite_14 can artificially enforce radiation conditions but lose accuracy if the number of points per wavelength remains fixed (the so-called pollution effect) @cite_20 . Another alternative method for direct discretization is the rigorous coupled-wave analysis (RCWA) or Fourier Modal Method. It is designed for multilayer gratings @cite_33 and is dependent on an iterative solve. While a Fourier factorization method @cite_29 @cite_36 can be used to accelerate convergence of an iterative solver, this solution approach is not ideal for problems with many right hand sides that arise in applications. An additional challenge of RCWA is that it is difficult to apply to arbitrarily shaped interfaces. | {
"cite_N": [
"@cite_14",
"@cite_33",
"@cite_36",
"@cite_29",
"@cite_40",
"@cite_13",
"@cite_20"
],
"mid": [
"2104498626",
"2118914178",
"1995367634",
"2060612859",
"2112311198",
"1591608973",
"2025124289"
],
"abstract": [
"SUMMARY The perfectly matched layer absorbing boundary condition has proven to be very efficient for the elastic wave equation written as a first-order system in velocity and stress. We demonstrate how to use this condition for the same equation written as a second-order system in displacement. This facilitates use in the context of numerical schemes based upon such a system, e.g. the finite-element method, the spectral-element method and some finite-difference methods. We illustrate the efficiency of this second-order perfectly matched layer based upon 2-D benchmarks with body and surface waves.",
"A rigorous coupled-wave approach is used to analyze diffraction by general planar gratings bounded by two different media. The grating fringes may have any orientation (slanted or unslanted) with respect to the grating surfaces. The analysis is based on a state-variables representation and results in a unifying, easily computer-implementable matrix formulation of the general planar-grating diffraction problem. Accurate diffraction characteristics are presented for the first time to the authors’ knowledge for general slanted gratings. This present rigorous formulation is compared with rigorous modal theory, approximate two-wave modal theory, approximate multiwave coupled-wave theory, and approximate two-wave coupled-wave theory. Typical errors in the diffraction characteristics introduced by these various approximate theories are evaluated for transmission, slanted, and reflection gratings. Inclusion of higher-order waves in a theory is important for obtaining accurate predictions when forward-diffracted orders are dominant (transmission-grating behavior). Conversely, when backward-diffracted orders dominate (reflection-grating behavior), second derivatives of the field amplitudes and boundary diffraction need to be included to produce accurate results.",
"Two recursive and numerically stable matrix algorithms for modeling layered diffraction gratings, the S-matrix algorithm and the R-matrix algorithm, are systematically presented in a form that is independent of the underlying grating models, geometries, and mountings. Many implementation variants of the algorithms are also presented. Their physical interpretations are given, and their numerical stabilities and efficiencies are discussed in detail. The single most important criterion for achieving unconditional numerical stability with both algorithms is to avoid the exponentially growing functions in every step of the matrix recursion. From the viewpoint of numerical efficiency, the S-matrix algorithm is generally preferred to the R-matrix algorithm, but exceptional cases are noted.",
"The recent reformulation of the coupled-wave method by Lalanne and Morris [ J. Opt. Soc. Am. A13, 779 ( 1996)] and by Granet and Guizal [ J. Opt. Soc. Am. A13, 1019 ( 1996)], which dramatically improves the convergence of the method for met allic gratings in TM polarization, is given a firm mathematical foundation in this paper. The new formulation converges faster because it uniformly satisfies the boundary conditions in the grating region, whereas the old formulations do so only nonuniformly. Mathematical theorems that govern the factorization of the Fourier coefficients of products of functions having jump discontinuities are given. The results of this paper are applicable to any numerical work that requires the Fourier analysis of products of discontinuous periodic functions.",
"Gmsh is an open-source 3-D finite element grid generator with a build-in CAD engine and post-processor. Its design goal is to provide a fast, light and user-friendly meshing tool with parametric input and advanced visualization capabilities. This paper presents the overall philosophy, the main design choices and some of the original algorithms implemented in Gmsh. Copyright (C) 2009 John Wiley & Sons, Ltd.",
"",
"The development of numerical methods for solving the Helmholtz equation, which behaves robustly with respect to the wave number, is a topic of vivid research. It was observed that the solution of the Galerkin finite element method (FEM) differs significantly from the best approximation with increasing wave number. Many attempts have been presented in the literature to eliminate this lack of robustness by various modifications of the classical Galerkin FEM. However, we will prove that, in two and more space dimensions, it is impossible to eliminate this so-called pollution effect. Furthermore, we will present a generalized FEM in one dimension which behaves robustly with respect to the wave number."
]
} |
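A key point in the record above is that a direct solver's precomputation can be reused across many incident angles, i.e., many right-hand sides of the same linear system. The sketch below illustrates only that generic idea with a dense LU factorization in SciPy; the paper's actual solver uses hierarchical, linear-complexity techniques rather than dense factorization, and the angle-dependent right-hand side here is a made-up placeholder.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

n = 500
A = np.random.rand(n, n) + n * np.eye(n)      # stand-in for a discretized integral operator

lu, piv = lu_factor(A)                         # "precomputation": done once, reused for every solve

# Each incident angle only changes the right-hand side, so each additional solve is cheap.
for angle in np.linspace(0.0, np.pi / 3, 5):
    b = np.cos(angle * np.arange(n))           # hypothetical angle-dependent boundary data
    x = lu_solve((lu, piv), b)
    print(f"angle={angle:.3f}  residual={np.linalg.norm(A @ x - b):.2e}")
```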
1907.06123 | 2961014033 | In this paper, we introduce the Preselection Bandit problem, in which the learner preselects a subset of arms (choice alternatives) for a user, which then chooses the final arm from this subset. The learner is not aware of the user's preferences, but can learn them from observed choices. In our concrete setting, we allow these choices to be stochastic and model the user's actions by means of the Plackett-Luce model. The learner's main task is to preselect subsets that eventually lead to highly preferred choices. To formalize this goal, we introduce a reasonable notion of regret and derive lower bounds on the expected regret. Moreover, we propose algorithms for which the upper bound on expected regret matches the lower bound up to a logarithmic term of the time horizon. | The flexible Pre-Bandit problem has obvious connections to the dueling bandits resp. battling bandits setting, with the freedom of adjusting the size of comparison for each time instance. In @cite_2 , the effect of this flexibility is investigated in an active PAC-framework for finding the best arm under the PL model, while the active top-k-arm identification problem in this model is studied in @cite_11 . Recently, this scenario was considered in terms of a regret minimization problem with top- @math -ranking feedback in @cite_13 , although the authors do not provide an algorithm for dealing with winner feedback (as we do in this paper). Yet, they provide gap-dependent lower bounds for winner feedback for a slightly different notion of regret. | {
"cite_N": [
"@cite_11",
"@cite_13",
"@cite_2"
],
"mid": [
"2964305024",
"2918221278",
"2894922130"
],
"abstract": [
"We study the active learning problem of top-k ranking from multi-wise comparisons under the popular multinomial logit model. Our goal is to identify the top-k items with high probability by adaptively querying sets for comparisons and observing the noisy output of the most preferred item from each comparison. To achieve this goal, we design a new active ranking algorithm without using any information about the underlying items' preference scores. We also establish a matching lower bound on the sample complexity even when the set of preference scores is given to the algorithm. These two results together show that the proposed algorithm is nearly instance optimal (similar to instance optimal [12], but up to polylog factors). Our work extends the existing literature on rank aggregation in three directions. First, instead of studying a static problem with fixed data, we investigate the top-k ranking problem in an active learning setting. Second, we show our algorithm is nearly instance optimal, which is a much stronger theoretical guarantee. Finally, we extend the pairwise comparison to the multi-wise comparison, which has not been fully explored in ranking literature.",
"We consider two regret minimisation problems over subsets of a finite ground set @math , with subset-wise relative preference information feedback according to the Multinomial logit choice model. The first setting requires the learner to test subsets of size bounded by a maximum size followed by receiving top- @math rank-ordered feedback, while in the second setting the learner is restricted to play subsets of a fixed size @math with a full ranking observed as feedback. For both settings, we devise new, order-optimal regret algorithms, and derive fundamental limits on the regret performance of online learning with subset-wise preferences. Our results also show the value of eliciting a general top @math -rank-ordered feedback over single winner feedback ( @math ).",
""
]
} |
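In the Plackett-Luce (multinomial logit) model referenced above, the probability that an item is chosen from a preselected subset is its latent utility divided by the sum of utilities in that subset, and a full ranking is obtained by applying the same rule repeatedly to the remaining items. A small simulation sketch with illustrative parameters (not from any cited experiment):

```python
import numpy as np

rng = np.random.default_rng(0)
utilities = np.array([2.0, 1.0, 0.5, 0.25, 0.1])   # hypothetical PL parameters for 5 arms

def pl_winner(subset):
    """Sample the chosen arm from a preselected subset under the Plackett-Luce model."""
    v = utilities[subset]
    probs = v / v.sum()
    return subset[rng.choice(len(subset), p=probs)]

def pl_ranking(subset):
    """Sample a full ranking of the subset by repeatedly drawing winners."""
    remaining, order = list(subset), []
    while remaining:
        w = pl_winner(np.array(remaining))
        order.append(int(w))
        remaining.remove(w)
    return order

wins = [pl_winner(np.array([0, 1, 2])) for _ in range(10000)]
print(np.bincount(wins) / len(wins))   # approx [0.571, 0.286, 0.143] = 2/3.5, 1/3.5, 0.5/3.5
print(pl_ranking(np.array([0, 1, 2, 3])))
```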
1907.06214 | 2958892036 | One of the questions that arises when designing models that learn to solve multiple tasks simultaneously is how much of the available training budget should be devoted to each individual task. We refer to any formalized approach to addressing this problem (learned or otherwise) as a task selection policy. In this work we provide an empirical evaluation of the performance of some common task selection policies in a synthetic bandit-style setting, as well as on the GLUE benchmark for natural language understanding. We connect task selection policy learning to existing work on automated curriculum learning and off-policy evaluation, and suggest a method based on counterfactual estimation that leads to improved model performance in our experimental settings. | Multitask learning has been studied extensively, and can be motivated as a means of inductive bias learning @cite_32 @cite_16 , representation learning @cite_4 @cite_25 , or as a form of learning to learn @cite_37 @cite_39 @cite_30 @cite_33 . In the context of natural language processing, MTL has been used to improve tasks such as semantic role labeling @cite_10 @cite_28 , and machine translation @cite_2 @cite_6 @cite_17 @cite_31 , and to learn general purpose sentence representations @cite_23 . One of the best performing models on the GLUE benchmark at the time of publication @cite_3 combines MTL pretraining (BERT) with MTL fine-tuning on the GLUE tasks, although the model also incorporates non-trivial task-specific components and additional training using knowledge distillation @cite_19 . | {
"cite_N": [
"@cite_30",
"@cite_37",
"@cite_4",
"@cite_33",
"@cite_28",
"@cite_32",
"@cite_6",
"@cite_39",
"@cite_3",
"@cite_19",
"@cite_23",
"@cite_2",
"@cite_31",
"@cite_16",
"@cite_10",
"@cite_25",
"@cite_17"
],
"mid": [
"1510488293",
"2143419558",
"2165644552",
"2030290736",
"2962788148",
"1614862348",
"2963247703",
"99485931",
"2937297214",
"1821462560",
"2786685006",
"2963842982",
"2899590084",
"2162888803",
"",
"2339391301",
""
],
"abstract": [
"",
"A Bayesian model of learning to learn by sampling from multiple tasks is presented. The multiple tasks are themselves generated by sampling from a distribution over an environment of related tasks. Such an environment is shown to be naturally modelled within a Bayesian context by the concept of an objective prior distribution. It is argued that for many common machine learning problems, although in general we do not know the true (objective) prior for the problem, we do have some idea of a set of possible priors to which the true prior belongs. It is shown that under these circumstances a learner can use Bayesian inference to learn the true prior by learning sufficiently many tasks from the environment. In addition, bounds are given on the amount of information required to learn a task when it is simultaneously learnt with several other tasks. The bounds show that if the learner has little knowledge of the true prior, but the dimensionality of the true prior is small, then sampling multiple tasks is highly advantageous. The theory is applied to the problem of learning a common feature set or equivalently a low-dimensional-representation (LDR) for an environment of related tasks.",
"We present a method for learning a low-dimensional representation which is shared across a set of multiple related tasks. The method builds upon the well-known 1-norm regularization problem using a new regularizer which controls the number of learned features common for all the tasks. We show that this problem is equivalent to a convex optimization problem and develop an iterative algorithm for solving it. The algorithm has a simple interpretation: it alternately performs a supervised and an unsupervised step, where in the latter step we learn commonacross-tasks representations and in the former step we learn task-specific functions using these representations. We report experiments on a simulated and a real data set which demonstrate that the proposed method dramatically improves the performance relative to learning each task independently. Our algorithm can also be used, as a special case, to simply select – not learn – a few common features across the tasks.",
"This paper describes an efficient method for learning the parameters of a Gaussian process (GP). The parameters are learned from multiple tasks which are assumed to have been drawn independently from the same GP prior. An efficient algorithm is obtained by extending the informative vector machine (IVM) algorithm to handle the multi-task learning case. The multi-task IVM (MTIVM) saves computation by greedily selecting the most informative examples from the separate tasks. The MT-IVM is also shown to be more efficient than random sub-sampling on an artificial data-set and more effective than the traditional IVM in a speaker dependent phoneme recognition task.",
"",
"This paper suggests that it may be easier to learn several hard tasks at one time than to learn these same tasks separately. In effect, the information provided by the training signal for each task serves as a domain-specific inductive bias for the other tasks. Frequently the world gives us clusters of related tasks to learn. When it does not, it is often straightforward to create additional tasks. For many domains, acquiring inductive bias by collecting additional teaching signal may be more practical than the traditional approach of codifying domain-specific biases acquired from human expertise. We call this approach Multitask Learning (MTL). Since much of the power of an inductive learner follows directly from its inductive bias, multitask learning may yield more powerful learning. An empirical example of multitask connectionist learning is presented where learning improves by training one network on several related tasks at the same time. Multitask decision tree induction is also outlined.",
"",
"Preface. Part I: Overview Articles. 1. Learning to Learn: Introduction and Overview S. Thrun, L. Pratt. 2. A Survey of Connectionist Network Reuse Through Transfer L. Pratt, B. Jennings. 3. Transfer in Cognition A. Robins. Part II: Prediction. 4. Theoretical Models of Learning to Learn J. Baxter. 5. Multitask Learning R. Caruana. 6. Making a Low-Dimensional Representation Suitable for Diverse Tasks N. Intrator, S. Edelman. 7. The Canonical Distortion Measure for Vector Quantization and Function Approximation J. Baxter. 8. Lifelong Learning Algorithms S. Thrun. Part III: Relatedness. 9. The Parallel Transfer of Task Knowledge Using Dynamic Learning Rates Based on a Measure of Relatedness D.L. Silver, R.E. Mercer. 10. Clustering Learning Tasks and the Selective Cross-Task Transfer of Knowledge S. Thrun, J. O'Sullivan. Part IV: Control. 11. CHILD: A First Step Towards Continual Learning M.B. Ring. 12. Reinforcement Learning with Self-Modifying Policies J. Schmidhuber, et al 13. Creating Advice-Taking Reinforcement Learners R. Maclin, J.W. Shavlik. Contributing Authors. Index.",
"This paper explores the use of knowledge distillation to improve a Multi-Task Deep Neural Network (MT-DNN) (, 2019) for learning text representations across multiple natural language understanding tasks. Although ensemble learning can improve model performance, serving an ensemble of large DNNs such as MT-DNN can be prohibitively expensive. Here we apply the knowledge distillation method (, 2015) in the multi-task learning setting. For each task, we train an ensemble of different MT-DNNs (teacher) that outperforms any single model, and then train a single MT-DNN (student) via multi-task learning to knowledge from these ensemble teachers. We show that the distilled MT-DNN significantly outperforms the original MT-DNN on 7 out of 9 GLUE tasks, pushing the GLUE benchmark (single model) to 83.7 (1.5 absolute improvement Based on the GLUE leaderboard at this https URL as of April 1, 2019. ). The code and pre-trained models will be made publicly available at this https URL.",
"A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.",
"A lot of the recent success in natural language processing (NLP) has been driven by distributed vector representations of words trained on large amounts of text in an unsupervised manner. These representations are typically used as general purpose features for words across a range of NLP problems. However, extending this success to learning representations of sequences of words, such as sentences, remains an open problem. Recent work has explored unsupervised as well as supervised learning techniques with different training objectives to learn general purpose fixed-length sentence representations. In this work, we present a simple, effective multi-task learning framework for sentence representations that combines the inductive biases of diverse training objectives in a single model. We train this model on several data sources with multiple training objectives on over 100 million sentences. Extensive experiments demonstrate that sharing a single recurrent sentence encoder across weakly related tasks leads to consistent improvements over previous methods. We present substantial improvements in the context of transfer learning and low-resource settings using our learned general-purpose representations.",
"Abstract: Sequence to sequence learning has recently emerged as a new paradigm in supervised learning. To date, most of its applications focused on only one task and not much work explored this framework for multiple tasks. This paper examines three multi-task learning (MTL) settings for sequence to sequence models: (a) the oneto-many setting - where the encoder is shared between several tasks such as machine translation and syntactic parsing, (b) the many-to-one setting - useful when only the decoder can be shared, as in the case of translation and image caption generation, and (c) the many-to-many setting - where multiple encoders and decoders are shared, which is the case with unsupervised objectives and translation. Our results show that training on a small amount of parsing and image caption data can improve the translation quality between English and German by up to 1.5 BLEU points over strong single-task baselines on the WMT benchmarks. Furthermore, we have established a new state-of-the-art result in constituent parsing with 93.0 F1. Lastly, we reveal interesting properties of the two unsupervised learning objectives, autoencoder and skip-thought, in the MTL context: autoencoder helps less in terms of perplexities but more on BLEU scores compared to skip-thought.",
"We frame unsupervised machine translation (MT) in the context of multi-task learning (MTL), combining insights from both directions. We leverage off-the-shelf neural MT architectures to train unsupervised MT models with no parallel data and show that such models can achieve reasonably good performance, competitive with models purpose-built for unsupervised MT. Finally, we propose improvements that allow us to apply our models to English-Turkish, a truly low-resource language pair.",
"A major problem in machine learning is that of inductive bias: how to choose a learner's hypothesis space so that it is large enough to contain a solution to the problem being learnt, yet small enough to ensure reliable generalization from reasonably-sized training sets. Typically such bias is supplied by hand through the skill and insights of experts. In this paper a model for automatically learning bias is investigated. The central assumption of the model is that the learner is embedded within an environment of related learning tasks. Within such an environment the learner can sample from multiple tasks, and hence it can search for a hypothesis space that contains good solutions to many of the problems in the environment. Under certain restrictions on the set of all hypothesis spaces available to the learner, we show that a hypothesis space that performs well on a sufficiently large number of training tasks will also perform well when learning novel tasks in the same environment. Explicit bounds are also derived demonstrating that learning multiple tasks within an environment of related tasks can potentially give much better generalization than learning a single task.",
"",
"Multi-task learning in Convolutional Networks has displayed remarkable success in the field of recognition. This success can be largely attributed to learning shared representations from multiple supervisory tasks. However, existing multi-task approaches rely on enumerating multiple network architectures specific to the tasks at hand, that do not generalize. In this paper, we propose a principled approach to learn shared representations in ConvNets using multi-task learning. Specifically, we propose a new sharing unit: \"cross-stitch\" unit. These units combine the activations from multiple networks and can be trained end-to-end. A network with cross-stitch units can learn an optimal combination of shared and task-specific representations. Our proposed method generalizes across multiple tasks and shows dramatically improved performance over baseline methods for categories with few training examples.",
""
]
} |
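Several of the approaches discussed in the record above treat "which task to train on next" as a non-stationary bandit problem, in the spirit of automated curriculum learning. Below is a minimal EXP3-style sketch of such a task-selection policy; the reward definition (e.g., a normalized loss decrease on the selected task) and all hyperparameters are placeholders, not those of any cited system.

```python
import numpy as np

class Exp3TaskSelector:
    """Adversarial-bandit-style task selection: sample a task, then update with an
    importance-weighted reward so tasks yielding more learning progress are chosen more often."""
    def __init__(self, n_tasks, gamma=0.1, seed=0):
        self.w = np.ones(n_tasks)
        self.gamma = gamma
        self.rng = np.random.default_rng(seed)

    def probabilities(self):
        return (1 - self.gamma) * self.w / self.w.sum() + self.gamma / len(self.w)

    def select(self):
        p = self.probabilities()
        return int(self.rng.choice(len(self.w), p=p)), p

    def update(self, task, reward, p):
        # reward should be scaled to [0, 1], e.g., a normalized loss decrease on `task`
        est = reward / p[task]
        self.w[task] *= np.exp(self.gamma * est / len(self.w))

selector = Exp3TaskSelector(n_tasks=4)
for step in range(100):
    task, p = selector.select()
    reward = 1.0 if task == 2 else 0.2          # pretend task 2 yields the most progress
    selector.update(task, reward, p)
print(selector.probabilities().round(2))         # probability mass concentrates on task 2
```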
1907.06214 | 2958892036 | One of the questions that arises when designing models that learn to solve multiple tasks simultaneously is how much of the available training budget should be devoted to each individual task. We refer to any formalized approach to addressing this problem (learned or otherwise) as a task selection policy. In this work we provide an empirical evaluation of the performance of some common task selection policies in a synthetic bandit-style setting, as well as on the GLUE benchmark for natural language understanding. We connect task selection policy learning to existing work on automated curriculum learning and off-policy evaluation, and suggest a method based on counterfactual estimation that leads to improved model performance in our experimental settings. | However, MTL in NLP is not always successful --- the GLUE baseline MTL models are significantly worse than the single task models @cite_42 , alonso2016 only find significant improvements with MTL in one of five tasks evaluated, and the multitask model in mccann2018 does not quite reach the performance of the same model trained on each task individually. | {
"cite_N": [
"@cite_42"
],
"mid": [
"2799054028"
],
"abstract": [
"For natural language understanding (NLU) technology to be maximally useful, both practically and as a scientific object of study, it must be general: it must be able to process language in a way that is not exclusively tailored to any one specific task or dataset. In pursuit of this objective, we introduce the General Language Understanding Evaluation benchmark (GLUE), a tool for evaluating and analyzing the performance of models across a diverse range of existing NLU tasks. GLUE is model-agnostic, but it incentivizes sharing knowledge across tasks because certain tasks have very limited training data. We further provide a hand-crafted diagnostic test suite that enables detailed linguistic analysis of NLU models. We evaluate baselines based on current methods for multi-task and transfer learning and find that they do not immediately give substantial improvements over the aggregate performance of training a separate model per task, indicating room for improvement in developing general and robust NLU systems."
]
} |
1907.06214 | 2958892036 | One of the questions that arises when designing models that learn to solve multiple tasks simultaneously is how much of the available training budget should be devoted to each individual task. We refer to any formalized approach to addressing this problem (learned or otherwise) as a task selection policy. In this work we provide an empirical evaluation of the performance of some common task selection policies in a synthetic bandit-style setting, as well as on the GLUE benchmark for natural language understanding. We connect task selection policy learning to existing work on automated curriculum learning and off-policy evaluation, and suggest a method based on counterfactual estimation that leads to improved model performance in our experimental settings. | Counterfactual estimation @cite_0 tries to answer the question what would have happened if an agent had taken different actions?'', and so is closely related to the problem of off-policy evaluation in the context of reinforcement learning Counterfactual estimation has also been studied under the names counterfactual reasoning'' and learning from logged bandit feedback''. . This setting presents an added set of challenges to on-policy evaluation, as an agent only has partial feedback from the environment, and it is assumed that collecting additional feedback is either not possible or prohibitively expensive. Typically approaches to off-policy evaluation are based on either modeling the environment dynamics and reward, using importance sampling, or a combination of the two @cite_44 @cite_18 @cite_35 @cite_26 @cite_21 . | {
"cite_N": [
"@cite_35",
"@cite_18",
"@cite_26",
"@cite_21",
"@cite_0",
"@cite_44"
],
"mid": [
"2964297722",
"2253356247",
"1835900096",
"2234859443",
"2165352044",
"1514587017"
],
"abstract": [
"We study decision making in environments where the reward is only partially observed, but can be modeled as a function of an action and an observed context. This setting, known as contextual bandits, encompasses a wide variety of applications including health-care policy and Internet advertising. A central task is evaluation of a new policy given historic data consisting of contexts, actions and received rewards. The key challenge is that the past data typically does not faithfully represent proportions of actions taken by a new policy. Previous approaches rely either on models of rewards or models of the past policy. The former are plagued by a large bias whereas the latter have a large variance. In this work, we leverage the strength and overcome the weaknesses of the two approaches by applying the doubly robust technique to the problems of policy evaluation and optimization. We prove that this approach yields accurate value estimates when we have either a good (but not necessarily consistent) model of rewards or a good (but not necessarily consistent) model of past policy. Extensive empirical comparison demonstrates that the doubly robust approach uniformly improves over existing techniques, achieving both lower variance in value estimation and better policies. As such, we expect the doubly robust approach to become common practice.",
"",
"We develop a learning principle and an efficient algorithm for batch learning from logged bandit feedback. This learning setting is ubiquitous in online systems (e.g., ad placement, web search, recommendation), where an algorithm makes a prediction (e.g., ad ranking) for a given input (e.g., query) and observes bandit feedback (e.g., user clicks on presented ads). We first address the counterfactual nature of the learning problem (, 2013) through propensity scoring. Next, we prove generalization error bounds that account for the variance of the propensity-weighted empirical risk estimator. In analogy to the Structural Risk Minimization principle of Wapnik and Tscherwonenkis (1979), these constructive bounds give rise to the Counterfactual Risk Minimization (CRM) principle. We show how CRM can be used to derive a new learning method--called Policy Optimizer for Exponential Models (POEM)--for learning stochastic linear rules for structured output prediction. We present a decomposition of the POEM objective that enables efficient stochastic gradient optimization. The effectiveness and efficiency of POEM is evaluated on several simulated multi-label classification problems, as well as on a real-world information retrieval problem. The empirical results show that the CRM objective implemented in POEM provides improved robustness and generalization performance compared to the state-of-the-art.",
"We study the problem of off-policy value evaluation in reinforcement learning (RL), where one aims to estimate the value of a new policy based on data collected by a different policy. This problem is often a critical step when applying RL in real-world problems. Despite its importance, existing general methods either have uncontrolled bias or suffer high variance. In this work, we extend the doubly robust estimator for bandits to sequential decision-making problems, which gets the best of both worlds: it is guaranteed to be unbiased and can have a much lower variance than the popular importance sampling estimators. We demonstrate the estimator's accuracy in several benchmark problems, and illustrate its use as a subroutine in safe policy improvement. We also provide theoretical results on the hardness of the problem, and show that our estimator can match the lower bound in certain scenarios.",
"This work shows how to leverage causal inference to understand the behavior of complex learning systems interacting with their environment and predict the consequences of changes to the system. Such predictions allow both humans and algorithms to select the changes that would have improved the system performance. This work is illustrated by experiments carried out on the ad placement system associated with the Bing search engine.",
"Eligibility traces have been shown to speed reinforcement learning, to make it more robust to hidden states, and to provide a link between Monte Carlo and temporal-difference methods. Here we generalize eligibility traces to off-policy learning, in which one learns about a policy different from the policy that generates the data. Off-policy methods can greatly multiply learning, as many policies can be learned about from the same data stream, and have been identified as particularly useful for learning about subgoals and temporally extended macro-actions. In this paper we consider the off-policy version of the policy evaluation problem, for which only one eligibility trace algorithm is known, a Monte Carlo method. We analyze and compare this and four new eligibility trace algorithms, emphasizing their relationships to the classical statistical technique known as importance sampling. Our main results are 1) to establish the consistency and bias properties of the new methods and 2) to empirically rank the new methods, showing improvement over one-step and Monte Carlo methods. Our results are restricted to model-free, table-lookup methods and to offline updating (at the end of each episode) although several of the algorithms could be applied more generally."
]
} |
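The importance-sampling and doubly robust estimators mentioned in the record above can be written compactly for logged contextual-bandit data of the form (context, action, logged propensity, reward). The sketch below is generic: it assumes a deterministic target policy and a separately fitted reward model, and is not the estimator of any particular cited paper.

```python
import numpy as np

def ips_estimate(logs, target_policy):
    """Inverse propensity scoring: reweight logged rewards by 1{pi(x)=a} / propensity."""
    vals = [(r / p) if target_policy(x) == a else 0.0 for (x, a, p, r) in logs]
    return float(np.mean(vals))

def doubly_robust_estimate(logs, target_policy, reward_model):
    """DR = model value under the target action + IPS correction of the model's error."""
    vals = []
    for (x, a, p, r) in logs:
        a_pi = target_policy(x)
        correction = (r - reward_model(x, a)) / p if a_pi == a else 0.0
        vals.append(reward_model(x, a_pi) + correction)
    return float(np.mean(vals))

# Tiny illustration with one-dimensional contexts and two actions.
logs = [(0.2, 0, 0.5, 1.0), (0.9, 1, 0.5, 0.0), (0.7, 1, 0.5, 1.0)]  # (x, a, propensity, r)
pi = lambda x: int(x > 0.5)                      # hypothetical target policy
model = lambda x, a: 0.5                         # deliberately crude reward model
print(ips_estimate(logs, pi), doubly_robust_estimate(logs, pi, model))
```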
1907.06226 | 2962669974 | Lexical simplification (LS) aims to replace complex words in a given sentence with their simpler alternatives of equivalent meaning. Recent unsupervised lexical simplification approaches rely only on the complex word itself, regardless of the given sentence, to generate candidate substitutions, which will inevitably produce a large number of spurious candidates. We present a simple BERT-based LS approach that makes use of the pre-trained unsupervised deep bidirectional representations BERT. We feed the given sentence, with the complex word masked, into the masked language model of BERT to generate candidate substitutions. By considering the whole sentence, the generated simpler alternatives are more likely to preserve the cohesion and coherence of the sentence. Experimental results show that our approach obtains a clear improvement on the standard LS benchmark. | As complex and simplified parallel corpora are available, especially the 'ordinary' English Wikipedia (EW) in combination with the 'simple' English Wikipedia (SEW), the paradigm of LS systems has shifted from knowledge-based to data-driven simplification @cite_5 @cite_23 @cite_0 . (2010) identified lexical simplifications from the edit history of SEW. They utilized a probabilistic method to recognize simplification edits, distinguishing them from other types of content changes. (2011) considered every pair of distinct words in the EW and SEW to be a possible simplification pair, and filtered part of them based on morphological variants and WordNet. (2014) also generated the candidate rules from the EW and SEW, and adopted a context-aware binary classifier to decide whether a candidate rule should be adopted or not in a certain context. The main limitation of this type of method is that it relies heavily on simplified corpora. | {
"cite_N": [
"@cite_0",
"@cite_5",
"@cite_23"
],
"mid": [
"",
"2153982529",
"1638924786"
],
"abstract": [
"",
"We present a method for lexical simplification. Simplification rules are learned from a comparable corpus, and the rules are applied in a context-aware fashion to input sentences. Our method is unsupervised. Furthermore, it does not require any alignment or correspondence among the complex and simple corpora. We evaluate the simplification according to three criteria: preservation of grammaticality, preservation of meaning, and degree of simplification. Results show that our method outperforms an established simplification baseline for both meaning preservation and simplification, while maintaining a high level of grammaticality.",
"We report on work in progress on extracting lexical simplifications (e.g., \"collaborate\" → \"work together\"), focusing on utilizing edit histories in Simple English Wikipedia for this task. We consider two main approaches: (1) deriving simplification probabilities via an edit model that accounts for a mixture of different operations, and (2) using metadata to focus on edits that are more likely to be simplification operations. We find our methods to outperform a reasonable baseline and yield many high-quality lexical simplifications not included in an independently-created manually prepared list."
]
} |
1907.06226 | 2962669974 | Lexical simplification (LS) aims to replace complex words in a given sentence with their simpler alternatives of equivalent meaning. Recent unsupervised lexical simplification approaches rely only on the complex word itself, regardless of the given sentence, to generate candidate substitutions, which will inevitably produce a large number of spurious candidates. We present a simple BERT-based LS approach that makes use of the pre-trained unsupervised deep bidirectional representations BERT. We feed the given sentence, with the complex word masked, into the masked language model of BERT to generate candidate substitutions. By considering the whole sentence, the generated simpler alternatives are more likely to preserve the cohesion and coherence of the sentence. Experimental results show that our approach obtains a clear improvement on the standard LS benchmark. | In this paper, we will first present a BERT-based LS approach that requires only a sufficiently large corpus of regular text without any manual effort. Pre-training language models @cite_15 @cite_18 @cite_12 have attracted wide attention and have been shown to be effective for improving many downstream natural language processing tasks. Our method exploits recent advances in BERT to generate suitable simplifications for complex words. Our method generates candidates for the complex word by considering the whole sentence, which makes it easier to preserve the cohesion and coherence of the sentence. In this case, many steps used in existing systems have been eliminated from our method, e.g., morphological transformation and substitution selection. Due to its fundamental nature, our approach can be applied to many languages. | {
"cite_N": [
"@cite_15",
"@cite_18",
"@cite_12"
],
"mid": [
"2896457183",
"2911489562",
"2914120296"
],
"abstract": [
"We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5 (7.7 point absolute improvement), MultiNLI accuracy to 86.7 (4.6 absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).",
"Biomedical text mining is becoming increasingly important as the number of biomedical documents rapidly grows. With the progress in machine learning, extracting valuable information from biomedical literature has gained popularity among researchers, and deep learning has boosted the development of effective biomedical text mining models. However, as deep learning models require a large amount of training data, applying deep learning to biomedical text mining is often unsuccessful due to the lack of training data in biomedical fields. Recent researches on training contextualized language representation models on text corpora shed light on the possibility of leveraging a large number of unannotated biomedical text corpora. We introduce BioBERT (Bidirectional Encoder Representations from Transformers for Biomedical Text Mining), which is a domain specific language representation model pre-trained on large-scale biomedical corpora. Based on the BERT architecture, BioBERT effectively transfers the knowledge from a large amount of biomedical texts to biomedical text mining models with minimal task-specific architecture modifications. While BERT shows competitive performances with previous state-of-the-art models, BioBERT significantly outperforms them on the following three representative biomedical text mining tasks: biomedical named entity recognition (0.51 absolute improvement), biomedical relation extraction (3.49 absolute improvement), and biomedical question answering (9.61 absolute improvement). We make the pre-trained weights of BioBERT freely available at this https URL, and the source code for fine-tuning BioBERT available at this https URL.",
"Recent studies have demonstrated the efficiency of generative pretraining for English natural language understanding. In this work, we extend this approach to multiple languages and show the effectiveness of cross-lingual pretraining. We propose two methods to learn cross-lingual language models (XLMs): one unsupervised that only relies on monolingual data, and one supervised that leverages parallel data with a new cross-lingual language model objective. We obtain state-of-the-art results on cross-lingual classification, unsupervised and supervised machine translation. On XNLI, our approach pushes the state of the art by an absolute gain of 4.9 accuracy. On unsupervised machine translation, we obtain 34.3 BLEU on WMT'16 German-English, improving the previous state of the art by more than 9 BLEU. On supervised machine translation, we obtain a new state of the art of 38.5 BLEU on WMT'16 Romanian-English, outperforming the previous best approach by more than 4 BLEU. Our code and pretrained models will be made publicly available."
]
} |
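The candidate-generation step described above (mask the complex word and let a pretrained masked language model propose in-context substitutes) maps almost directly onto the Hugging Face `fill-mask` pipeline. The snippet below is a simplified illustration of that single step only, assuming a reasonably recent version of the transformers library and network access to download the pretrained weights; the full approach in the paper also covers how the original word is presented to the model and how candidates are ranked and filtered.

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# The complex word has been replaced by BERT's mask token.
sentence = "The doctor prescribed a [MASK] for the infection."

# Each candidate comes with the model's probability; simpler, frequent words tend to rank high.
for cand in fill_mask(sentence, top_k=5):       # top_k requires a recent transformers release
    print(f"{cand['token_str']:>12}  {cand['score']:.3f}")
```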
1907.06167 | 2957307908 | Food classification is a challenging problem due to the large number of categories, high visual similarity between different foods, as well as the lack of datasets for training state-of-the-art deep models. Solving this problem will require advances in both computer vision models as well as datasets for evaluating these models. In this paper we focus on the second aspect and introduce FoodX-251, a dataset of 251 fine-grained food categories with 158k images collected from the web. We use 118k images as a training set and provide human verified labels for 40k images that can be used for validation and testing. In this work, we outline the procedure of creating this dataset and provide relevant baselines with deep learning models. The FoodX-251 dataset has been used for organizing iFood-2019 challenge in the Fine-Grained Visual Categorization workshop (FGVC6 at CVPR 2019) and is available for download. | Earlier works have tried to tackle the issue of limited datasets for food classification by collecting training data using human annotators or crowd-sourcing platforms @cite_30 @cite_27 @cite_9 @cite_14 @cite_10 . Such data curation is expensive and limits the scalability in terms of number of training categories as well as number of training samples per category. Moreover, it is challenging to label images for food classification tasks as they often have co-occurring food items, partially occluded food items, and large variability in scale and viewpoints. Accurate annotation of these images would require bounding boxes, making data curation even more time and cost prohibitive. Thus, it is important to build food datasets with minimal data curation so that they can be scaled to novel categories based on the final application. Our solution is motivated by recent advances in exploiting the knowledge available in web-search engines and using it to collect a large-scale dataset with minimal supervision @cite_21 . | {
"cite_N": [
"@cite_30",
"@cite_14",
"@cite_9",
"@cite_21",
"@cite_27",
"@cite_10"
],
"mid": [
"",
"2007121115",
"1030521295",
"2778627217",
"2057352599",
""
],
"abstract": [
"",
"We present snap-n-eat, a mobile food recognition system. The system can recognize food and estimate the calorific and nutrition content of foods automatically without any user intervention. To identify food items, the user simply snaps a photo of the food plate. The system detects the salient region, crops its image, and subtracts the background accordingly. Hierarchical segmentation is performed to segment the image into regions. We then extract features at different locations and scales and classify these regions into different kinds of foods using a linear support vector machine classifier. In addition, the system determines the portion size which is then used to estimate the calorific and nutrition content of the food present on the plate. Previous approaches have mostly worked with either images captured in a lab setting, or they require additional user input (eg, user crop bounding boxes). Our system achieves automatic food detection and recognition in real-life settings containing cluttered backgrounds. When multiple food items appear in an image, our system can identify them and estimate their portion size simultaneously. We implemented this system as both an Android smartphone application and as a web service. In our experiments, we have achieved above 85 accuracy when detecting 15 different kinds of foods.",
"In this paper, we propose a novel effective framework to expand an existing image dataset automatically leveraging existing categories and crowdsourcing. Especially, in this paper, we focus on expansion on food image data set. The number of food categories is uncountable, since foods are different from a place to a place. If we have a Japanese food dataset, it does not help build a French food recognition system directly. That is why food data sets for different food cultures have been built independently so far. Then, in this paper, we propose to leverage existing knowledge on food of other cultures by a generic “foodness” classifier and domain adaptation. This can enable us not only to built other-cultured food datasets based on an original food image dataset automatically, but also to save as much crowd-sourcing costs as possible. In the experiments, we show the effectiveness of the proposed method over the baselines.",
"Food classification from images is a fine-grained classification problem. Manual curation of food images is cost, time and scalability prohibitive. On the other hand, web data is available freely but contains noise. In this paper, we address the problem of classifying food images with minimal data curation. We also tackle a key problems with food images from the web where they often have multiple cooccuring food types but are weakly labeled with a single label. We first demonstrate that by sequentially adding a few manually curated samples to a larger uncurated dataset from two web sources, the top-1 classification accuracy increases from 50.3 to 72.8 . To tackle the issue of weak labels, we augment the deep model with Weakly Supervised learning (WSL) that results in an increase in performance to 76.2 . Finally, we show some qualitative results to provide insights into the performance improvements using the proposed ideas.",
"Computer-aided food identification and quantity estimation have caught more attention in recent years because of the growing concern of our health. The identification problem is usually defined as an image categorization or classification problem and several researches have been proposed. In this paper, we address the issues of feature descriptors in the food identification problem and introduce a preliminary approach for the quantity estimation using depth information. Sparse coding is utilized in the SIFT and Local binary pattern feature descriptors, and these features combined with gabor and color features are used to represent food items. A multi-label SVM classifier is trained for each feature, and these classifiers are combined with multi-class Adaboost algorithm. For evaluation, 50 categories of worldwide food are used, and each category contains 100 photographs from different sources, such as manually taken or from Internet web albums. An overall accuracy of 68.3 is achieved, and success at top-N candidates achieved 80.6 , 84.8 , and 90.9 accuracy accordingly when N equals 2, 3, and 5, thus making mobile application practical. The experimental results show that the proposed methods greatly improve the performance of original SIFT and LBP feature descriptors. On the other hand, for quantity estimation using depth information, a straight forward method is proposed for certain food, while transparent food ingredients such as pure water and cooked rice are temporarily excluded.",
""
]
} |
1907.06167 | 2957307908 | Food classification is a challenging problem due to the large number of categories, high visual similarity between different foods, as well as the lack of datasets for training state-of-the-art deep models. Solving this problem will require advances in both computer vision models as well as datasets for evaluating these models. In this paper we focus on the second aspect and introduce FoodX-251, a dataset of 251 fine-grained food categories with 158k images collected from the web. We use 118k images as a training set and provide human verified labels for 40k images that can be used for validation and testing. In this work, we outline the procedure of creating this dataset and provide relevant baselines with deep learning models. The FoodX-251 dataset has been used for organizing iFood-2019 challenge in the Fine-Grained Visual Categorization workshop (FGVC6 at CVPR 2019) and is available for download. | UEC256 @cite_9 consists of @math categories with a bounding box indicating the location of its category label. However, it mostly contains Japanese food items. ChineseFoodNet @cite_24 consists of @math images from @math categories but is restricted to Chinese food items only. The NutriNet dataset @cite_29 contains @math images from @math food and drink classes but is limited to Central European food items. In comparison to these datasets, our dataset consists of miscellaneous food items from various cuisines. | {
"cite_N": [
"@cite_24",
"@cite_9",
"@cite_29"
],
"mid": [
"2613010453",
"1030521295",
"2643943583"
],
"abstract": [
"In this paper, we introduce a new and challenging large-scale food image dataset called \"ChineseFoodNet\", which aims to automatically recognizing pictured Chinese dishes. Most of the existing food image datasets collected food images either from recipe pictures or selfie. In our dataset, images of each food category of our dataset consists of not only web recipe and menu pictures but photos taken from real dishes, recipe and menu as well. ChineseFoodNet contains over 180,000 food photos of 208 categories, with each category covering a large variations in presentations of same Chinese food. We present our efforts to build this large-scale image dataset, including food category selection, data collection, and data clean and label, in particular how to use machine learning methods to reduce manual labeling work that is an expensive process. We share a detailed benchmark of several state-of-the-art deep convolutional neural networks (CNNs) on ChineseFoodNet. We further propose a novel two-step data fusion approach referred as \"TastyNet\", which combines prediction results from different CNNs with voting method. Our proposed approach achieves top-1 accuracies of 81.43 on the validation set and 81.55 on the test set, respectively. The latest dataset is public available for research and can be achieved at this https URL.",
"In this paper, we propose a novel effective framework to expand an existing image dataset automatically leveraging existing categories and crowdsourcing. Especially, in this paper, we focus on expansion on food image data set. The number of food categories is uncountable, since foods are different from a place to a place. If we have a Japanese food dataset, it does not help build a French food recognition system directly. That is why food data sets for different food cultures have been built independently so far. Then, in this paper, we propose to leverage existing knowledge on food of other cultures by a generic “foodness” classifier and domain adaptation. This can enable us not only to built other-cultured food datasets based on an original food image dataset automatically, but also to save as much crowd-sourcing costs as possible. In the experiments, we show the effectiveness of the proposed method over the baselines.",
"Automatic food image recognition systems are alleviating the process of food-intake estimation and dietary assessment. However, due to the nature of food images, their recognition is a particularly challenging task, which is why traditional approaches in the field have achieved a low classification accuracy. Deep neural networks have outperformed such solutions, and we present a novel approach to the problem of food and drink image detection and recognition that uses a newly-defined deep convolutional neural network architecture, called NutriNet. This architecture was tuned on a recognition dataset containing 225,953 512 × 512 pixel images of 520 different food and drink items from a broad spectrum of food groups, on which we achieved a classification accuracy of 86.72 , along with an accuracy of 94.47 on a detection dataset containing 130,517 images. We also performed a real-world test on a dataset of self-acquired images, combined with images from Parkinson’s disease patients, all taken using a smartphone camera, achieving a top-five accuracy of 55 , which is an encouraging result for real-world images. Additionally, we tested NutriNet on the University of Milano-Bicocca 2016 (UNIMIB2016) food image dataset, on which we improved upon the provided baseline recognition result. An online training component was implemented to continually fine-tune the food and drink recognition model on new images. The model is being used in practice as part of a mobile app for the dietary assessment of Parkinson’s disease patients."
]
} |
1907.05905 | 2738314890 | This paper describes a preliminary investigation of Voice Pathology Detection using Deep Neural Networks (DNN). We used voice recordings of sustained vowel a produced at normal pitch from German corpus Saarbruecken Voice Database (SVD). This corpus contains voice recordings and electroglottograph signals of more than 2 000 speakers. The idea behind this experiment is the use of convolutional layers in combination with recurrent Long-Short-Term-Memory (LSTM) layers on raw audio signal. Each recording was split into 64 ms Hamming windowed segments with 30 ms overlap. Our trained model achieved 71.36 accuracy with 65.04 sensitivity and 77.67 specificity on 206 validation files and 68.08 accuracy with 66.75 sensitivity and 77.89 specificity on 874 testing files. This is a promising result in favor of this approach because it is comparable to similar previously published experiment that used different methodology. Further investigation is needed to achieve the state-of-the-art results. | The results vary greatly between the published papers mainly due to differences between sets of data that were used for the experiment. Martínez in @cite_21 reported 72 , | {
"cite_N": [
"@cite_21"
],
"mid": [
"131583169"
],
"abstract": [
"The paper presents a set of experiments on pathological voice detection over the Saarbrucken Voice Database (SVD) by using the MultiFocal toolkit for a discriminative calibration and fusion. The SVD is freely available online containing a collection of voice recordings of different pathologies, including both functional and organic. A generative Gaussian mixture model trained with mel-frequency cepstral coefficients, harmonics-to-noise ratio, normalized noise energy and glottal-to-noise excitation ratio, is used as classifier. Scores are calibrated to increase performance at the desired operating point. Finally, the fusion of different recordings for each speaker, in which vowels a , i and u are pronounced with normal, low, high, and low-high-low intonations, offers a great increase in the performance. Results are compared with the Massachusetts Eye and Ear Infirmary (MEEI) database, which makes possible to see that SVD is much more challenging."
]
} |
1907.05945 | 2958639220 | We propose NH-TTC, a general method for fast, anticipatory collision avoidance for autonomous robots having arbitrary equations of motions. Our proposed approach exploits implicit differentiation and subgradient descent to locally optimize the non-convex and non-smooth cost functions that arise from planning over the anticipated future positions of nearby obstacles. The result is a flexible framework capable of supporting high-quality, collision-free navigation with a wide variety of robot motion models in various challenging scenarios. We show results for different navigating tasks, with our method controlling various numbers of agents (with and without reciprocity), on both physical differential drive robots, and simulated robots with different motion models and kinematic and dynamic constraints, including acceleration-controlled agents, differential-drive agents, and smooth car-like agents. The resulting paths are high quality and collision-free, while needing only a few milliseconds of computation as part of an integrated sense-plan-act navigation loop. | More closely related to our work is the Generalized Reciprocal Velocity Obstacles (GRVO) approach of @cite_5 that provides a generalized framework for local navigation supporting robots with both linear and nonlinear equations of motions. Similar to GVO, GRVO plans over potential robot controls, but does so in an indirect manner by representing a set of controls over time as a single high-level target velocity. GRVO then "ORCAfies" this space of target velocity using linear approximations to represent which target velocities may lead to collisions. However, GRVO, and other ORCA-like approaches, can be overly conservative in their approximations, forbidding large amounts of admissible controls due to the linearization of control constraints. Furthermore, these approaches all typically use a binary indicator cost function to assess the quality of a given control input; a control is either strictly forbidden or fully allowed. Together, these factors can lead to inefficient robot behavior, especially in highly constrained scenarios. | {
"cite_N": [
"@cite_5"
],
"mid": [
"1965760189"
],
"abstract": [
"Reciprocal collision avoidance has become a popular area of research over recent years. Approaches have been developed for a variety of dynamic systems ranging from single integrators to car-like, differential-drive, and arbitrary, linear equations of motion. In this paper, we present two contributions. First, we provide a unification of these previous approaches under a single, generalized representation using control obstacles. In particular, we show how velocity obstacles, acceleration velocity obstacles, continuous control obstacles, and LQR-obstacles are special instances of our generalized framework. Secondly, we present an extension of control obstacles to general reciprocal collision avoidance for non-linear, non-homogeneous systems where the robots may have different state spaces and different non-linear equations of motion from one another. Previous approaches to reciprocal collision avoidance could not be applied to such systems, as they use a relative formulation of the equations of motion and can, therefore, only apply to homogeneous, linear systems where all robots have the same linear equations of motion. Our approach allows for general mobile robots to independently select new control inputs while avoiding collisions with each other. We implemented our approach in simulation for a variety of mobile robots with non-linear equations of motion: differential-drive, differential-drive with a trailer, car-like, and hovercrafts. We also performed physical experiments with a combination of differential-drive, differential-drive with a trailer, and car-like robots. Our results show that our approach is capable of letting a non-homogeneous group of robots with non-linear equations of motion safely avoid collisions at real-time computation rates."
]
} |
1907.06085 | 2956903327 | We consider a quantity that measures the roundness of a bounded, convex @math -polytope in @math . We majorise this quantity in terms of the smallest singular value of the matrix of outer unit normals to the facets of the polytope. | Similar quantities appear also in the literature on finite volume methods. For example, in @cite_3 , a finite volume method based on Voronoi tessellations of the sphere is constructed, and a 'regularity norm' is defined for the Voronoi tessellation by minimising the ratio @math over all neighbours of each Voronoi cell @math generated by a point @math on the sphere, and over all Voronoi cells @math in the tessellation. Note that both @math and @math above are computed with respect to the geodesic metric on the sphere. For any given Voronoi cell @math with generator @math , the smallest value of @math over all generators @math of adjacent Voronoi cells @math resembles the degeneracy ratio of @math defined in . This is because if @math is a Voronoi cell with generator @math , and if @math is small, then @math will be small, and vice versa. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2161434505"
],
"abstract": [
"We first develop and analyze a finite volume scheme for the discretization of partial differential equations (PDEs) on the sphere; the scheme uses Voronoi tessellations of the sphere. For a model convection–diffusion problem, the finite volume scheme is shown to produce first-order accurate approximations with respect to a mesh-dependent discrete firstderivative norm. Then, we introduce the notion of constrained centroidal Voronoi tessellations (CCVTs) of the sphere; these are special Voronoi tessellation of the sphere for which the generators of the Voronoi cells are also the constrained centers of mass, with respect to a prescribed density function, of the cells. After discussing an algorithm for determining CCVT meshes on the sphere, we discuss and illustrate several desirable properties possessed by these meshes. In particular, it is shown that CCVT meshes define very high-quality uniform and non-uniform meshes on the sphere. Finally, we discuss, through some computational experiments, the performance of the CCVT meshes used in conjunction with the finite volume scheme for the solution of simple model PDEs on the sphere. The experiments show, for example, that the CCVT based finite volume approximations are second-order accurate if errors are measured in"
]
} |
1907.06078 | 2956637491 | Despite the widespread use of supervised deep learning methods for affect recognition from speech, they are severely limited by the lack of a sufficient amount of labelled speech data. Considering the abundant availability of unlabelled data, this paper proposed a semi-supervised model that can effectively utilise the unlabelled data in multi-task learning way in order to improve the performance of speech emotion recognition. The proposed model adversarially learns a shared representation for two auxiliary tasks along with emotion identification as the main task. We consider speaker and gender identification as auxiliary tasks in order to operate the model on any large audio corpus. We demonstrate that in a scenario with limited labelled training samples, one can significantly improve the performance of a supervised classification task by simultaneously training with additional auxiliary tasks having an availability of large amount of data. The proposed model is rigorously evaluated for both categorical and dimensional emotion classification tasks. Experimental results demonstrate that the proposed model achieves state-of-the-art performance on two publicly available datasets. | Multi-task learning (MTL) has been successful for simultaneously modelling multiple related tasks utilising a shared representation @cite_28 @cite_44 . It aims to improve generalisation by learning the similarities as well as the differences among the given tasks from the training data @cite_69 . The standard methodology to optimise a machine learning model for one task at a time ignores potentially rich information in the training signal @cite_63 . Such information can be effectively utilised for auxiliary tasks to improve generalisation and performance of the system. Several MTL approaches @cite_68 @cite_6 @cite_36 have been widely used for solving problems in computer vision. The primary reason to use MTL in vision is that images can provide information related to different tasks, and simultaneously learning these correlated tasks can boost the performance of each individual task @cite_37 @cite_79 . For example, face detection, gender recognition, and pose estimation can be simultaneously performed using a deep CNN @cite_80 . | {
"cite_N": [
"@cite_69",
"@cite_37",
"@cite_28",
"@cite_36",
"@cite_6",
"@cite_44",
"@cite_79",
"@cite_63",
"@cite_80",
"@cite_68"
],
"mid": [
"",
"204612701",
"2036043322",
"2022508996",
"1896424170",
"2162888803",
"2047508432",
"2937977583",
"2963377935",
"2078224158"
],
"abstract": [
"",
"We present a new state-of-the-art approach for face detection. The key idea is to combine face alignment with detection, observing that aligned face shapes provide better features for face classification. To make this combination more effective, our approach learns the two tasks jointly in the same cascade framework, by exploiting recent advances in face alignment. Such joint learning greatly enhances the capability of cascade detection and still retains its realtime performance. Extensive experiments show that our approach achieves the best accuracy on challenging datasets, where all existing solutions are either inaccurate or too slow.",
"The approach of learning of multiple “related” tasks simultaneously has proven quite successful in practice; however, theoretical justification for this success has remained elusive. The starting point for previous work on multiple task learning has been that the tasks to be learned jointly are somehow “algorithmically related”, in the sense that the results of applying a specific learning algorithm to these tasks are assumed to be similar. We offer an alternative approach, defining relatedness of tasks on the basis of similarity between the example generating distributions that underline these task.",
"Scene labeling consists of labeling each pixel in an image with the category of the object it belongs to. We propose a method that uses a multiscale convolutional network trained from raw pixels to extract dense feature vectors that encode regions of multiple sizes centered on each pixel. The method alleviates the need for engineered features, and produces a powerful representation that captures texture, shape, and contextual information. We report results using multiple postprocessing methods to produce the final labeling. Among those, we propose a technique to automatically retrieve, from a pool of segmentation components, an optimal set of components that best explain the scene; these components are arbitrary, for example, they can be taken from a segmentation tree or from any family of oversegmentations. The system yields record accuracies on the SIFT Flow dataset (33 classes) and the Barcelona dataset (170 classes) and near-record accuracy on Stanford background dataset (eight classes), while being an order of magnitude faster than competing approaches, producing a 320×240 image labeling in less than a second, including feature extraction.",
"Facial landmark detection has long been impeded by the problems of occlusion and pose variation. Instead of treating the detection task as a single and independent problem, we investigate the possibility of improving detection robustness through multi-task learning. Specifically, we wish to optimize facial landmark detection together with heterogeneous but subtly correlated tasks, e.g. head pose estimation and facial attribute inference. This is non-trivial since different tasks have different learning difficulties and convergence rates. To address this problem, we formulate a novel tasks-constrained deep model, with task-wise early stopping to facilitate learning convergence. Extensive evaluations show that the proposed task-constrained learning (i) outperforms existing methods, especially in dealing with faces with severe occlusion and pose variation, and (ii) reduces model complexity drastically compared to the state-of-the-art method based on cascaded deep model [21].",
"A major problem in machine learning is that of inductive bias: how to choose a learner's hypothesis space so that it is large enough to contain a solution to the problem being learnt, yet small enough to ensure reliable generalization from reasonably-sized training sets. Typically such bias is supplied by hand through the skill and insights of experts. In this paper a model for automatically learning bias is investigated. The central assumption of the model is that the learner is embedded within an environment of related learning tasks. Within such an environment the learner can sample from multiple tasks, and hence it can search for a hypothesis space that contains good solutions to many of the problems in the environment. Under certain restrictions on the set of all hypothesis spaces available to the learner, we show that a hypothesis space that performs well on a sufficiently large number of training tasks will also perform well when learning novel tasks in the same environment. Explicit bounds are also derived demonstrating that learning multiple tasks within an environment of related tasks can potentially give much better generalization than learning a single task.",
"We present a unified model for face detection, pose estimation, and landmark estimation in real-world, cluttered images. Our model is based on a mixtures of trees with a shared pool of parts; we model every facial landmark as a part and use global mixtures to capture topological changes due to viewpoint. We show that tree-structured models are surprisingly effective at capturing global elastic deformation, while being easy to optimize unlike dense graph structures. We present extensive results on standard face benchmarks, as well as a new “in the wild” annotated dataset, that suggests our system advances the state-of-the-art, sometimes considerably, for all three tasks. Though our model is modestly trained with hundreds of faces, it compares favorably to commercial systems trained with billions of examples (such as Google Picasa and face.com).",
"Despite the increasing research interest in end-to-end learning systems for speech emotion recognition, conventional systems either suffer from the overfitting due in part to the limited training data, or do not explicitly consider the different contributions of automatically learnt representations for a specific task. In this contribution, we propose a novel end-to-end framework which is enhanced by learning other auxiliary tasks and an attention mechanism. That is, we jointly train an end-to-end network with several different but related emotion prediction tasks, i. e., arousal, valence, and dominance predictions, to extract more robust representations shared among various tasks than traditional systems with the hope that it is able to relieve the overfitting problem. Meanwhile, an attention layer is implemented on top of the layers for each task, with the aim to capture the contribution distribution of different segment parts for each individual task. To evaluate the effectiveness of the proposed system, we conducted a set of experiments on the widely used database IEMOCAP. The empirical results show that the proposed systems significantly outperform corresponding baseline systems.",
"We present an algorithm for simultaneous face detection, landmarks localization, pose estimation and gender recognition using deep convolutional neural networks (CNN). The proposed method called, HyperFace, fuses the intermediate layers of a deep CNN using a separate CNN followed by a multi-task learning algorithm that operates on the fused features. It exploits the synergy among the tasks which boosts up their individual performances. Additionally, we propose two variants of HyperFace: (1) HyperFace-ResNet that builds on the ResNet-101 model and achieves significant improvement in performance, and (2) Fast-HyperFace that uses a high recall fast face detector for generating region proposals to improve the speed of the algorithm. Extensive experiments show that the proposed models are able to capture both global and local information in faces and performs significantly better than many competitive algorithms for each of these four tasks.",
"We propose an heterogeneous multi-task learning framework for human pose estimation from monocular image with deep convolutional neural network. In particular, we simultaneously learn a pose-joint regressor and a sliding-window body-part detector in a deep network architecture. We show that including the body-part detection task helps to regularize the network, directing it to converge to a good solution. We report competitive and state-of-art results on several data sets. We also empirically show that the learned neurons in the middle layer of our network are tuned to localized body parts."
]
} |
1907.06078 | 2956637491 | Despite the widespread use of supervised deep learning methods for affect recognition from speech, they are severely limited by the lack of a sufficient amount of labelled speech data. Considering the abundant availability of unlabelled data, this paper proposed a semi-supervised model that can effectively utilise the unlabelled data in multi-task learning way in order to improve the performance of speech emotion recognition. The proposed model adversarially learns a shared representation for two auxiliary tasks along with emotion identification as the main task. We consider speaker and gender identification as auxiliary tasks in order to operate the model on any large audio corpus. We demonstrate that in a scenario with limited labelled training samples, one can significantly improve the performance of a supervised classification task by simultaneously training with additional auxiliary tasks having an availability of large amount of data. The proposed model is rigorously evaluated for both categorical and dimensional emotion classification tasks. Experimental results demonstrate that the proposed model achieves state-of-the-art performance on two publicly available datasets. | Another stream of research, instead of using different emotional attributes as auxiliary tasks, has utilised other available attributes, such as speaker and gender, to improve the performance of SER @cite_52 . For instance, @cite_33 used gender and naturalness (natural or acted corpus) recognition as auxiliary tasks to improve the performance of emotion recognition using different emotional databases. @cite_21 used an MTL approach to investigate the influence of domain (whether the expression is spoken or sung), corpus, and gender on the generalisability of cross-corpus emotion recognition systems. The authors used six emotional databases and showed that the performance of the SER system increases with the number of training emotional corpora. | {
"cite_N": [
"@cite_21",
"@cite_52",
"@cite_33"
],
"mid": [
"2598545578",
"2963929227",
"2749894918"
],
"abstract": [
"There is growing interest in emotion recognition due to its potential in many applications. However, a pervasive challenge is the presence of data variability caused by factors such as differences across corpora, speaker’s gender, and the “domain” of expression (e.g., whether the expression is spoken or sung). Prior work has addressed this challenge by combining data across corpora and or genders, or by explicitly controlling for these factors. In this work, we investigate the influence of corpus, domain, and gender on the cross-corpus generalizability of emotion recognition systems. We use a multi-task learning approach, where we define the tasks according to these factors. We find that incorporating variability caused by corpus, domain, and gender through multi-task learning outperforms approaches that treat the tasks as either identical or independent. Domain is a larger differentiating factor than gender for multi-domain data. When considering only the speech domain, gender and corpus are similarly influential. Defining tasks by gender is more beneficial than by either corpus or corpus and gender for valence, while the opposite holds for activation. On average, cross-corpus performance increases with the number of training corpora. The results demonstrate that effective cross-corpus modeling requires that we understand how emotion expression patterns change as a function of non-emotional factors.",
"Long short-term memory (LSTM) is normally used in recurrent neural network (RNN) as basic recurrent unit. However, conventional LSTM assumes that the state at current time step depends on previous time step. This assumption constraints the time dependency modeling capability. In this study, we propose a new variation of LSTM, advanced LSTM (A-LSTM), for better temporal context modeling. We employ A-LSTM in weighted pooling RNN for emotion recognition. The A-LSTM outperforms the conventional LSTM by 5.5 relatively. The A-LSTM based weighted pooling RNN can also complement the state-of-the-art emotion classification framework. This shows the advantage of A-LSTM.",
"One of the challenges in Speech Emotion Recognition (SER) \"in the wild\" is the large mismatch between training and test data (e.g. speakers and tasks). In order to improve the generalisation capabilities of the emotion models, we propose to use Multi-Task Learning (MTL) and use gender and naturalness as auxiliary tasks in deep neural networks. This method was evaluated in within-corpus and various cross-corpus classification experiments that simulate conditions \"in the wild\". In comparison to Single-Task Learning (STL) based state of the art methods, we found that our MTL method proposed improved performance significantly. Particularly, models using both gender and naturalness achieved more gains than those using either gender or naturalness separately. This benefit was also found in the high-level representations of the feature space, obtained from our method proposed, where discriminative emotional clusters could be observed."
]
} |
1907.05820 | 2958586587 | We present GLNet, a self-supervised framework for learning depth, optical flow, camera pose and intrinsic parameters from monocular video -- addressing the difficulty of acquiring realistic ground-truth for such tasks. We propose three contributions: 1) we design new loss functions that capture multiple geometric constraints (eg. epipolar geometry) as well as adaptive photometric costs that support multiple moving objects, rigid and non-rigid, 2) we extend the model such that it predicts camera intrinsics, making it applicable to uncalibrated video, and 3) we propose several online finetuning strategies that rely on the symmetry of our self-supervised loss in both training and testing, in particular optimizing model parameters and/or the output of different tasks, leveraging their mutual interactions. The idea of jointly optimizing the system output, under all geometric and photometric constraints can be viewed as a dense generalization of classical bundle adjustment. We demonstrate the effectiveness of our method on KITTI and Cityscapes, where we outperform previous self-supervised approaches on multiple tasks. We also show good generalization for transfer learning. | Understanding the geometry and dynamics of 3D scenes usually involves estimating multiple properties including depth, camera motion or optical flow. Structure from Motion (SfM) @cite_34 @cite_33 @cite_14 @cite_21 @cite_40 @cite_19 or scene flow estimation @cite_25 @cite_42 are well-established methodologies with a solid record of fundamental and practical progress. Classical approaches are based on low-level feature matching followed by geometric verification using algebraic relations such as the epipolar constraint, in conjunction with RANSAC @cite_39 , and bundle adjustment @cite_16 . Models for structure and motion estimation based on such pipelines have produced impressive results @cite_54 , but their reconstructions are often sparse and prone to error in textureless or occluded areas. | {
"cite_N": [
"@cite_14",
"@cite_33",
"@cite_54",
"@cite_21",
"@cite_42",
"@cite_16",
"@cite_39",
"@cite_19",
"@cite_40",
"@cite_34",
"@cite_25"
],
"mid": [
"2108134361",
"2033819227",
"2471962767",
"",
"",
"",
"2130017587",
"",
"",
"2008706659",
"1921093919"
],
"abstract": [
"DTAM is a system for real-time camera tracking and reconstruction which relies not on feature extraction but dense, every pixel methods. As a single hand-held RGB camera flies over a static scene, we estimate detailed textured depth maps at selected keyframes to produce a surface patchwork with millions of vertices. We use the hundreds of images available in a video stream to improve the quality of a simple photometric data term, and minimise a global spatially regularised energy functional in a novel non-convex optimisation framework. Interleaved, we track the camera's 6DOF motion precisely by frame-rate whole image alignment against the entire dense model. Our algorithms are highly parallelisable throughout and DTAM achieves real-time performance using current commodity GPU hardware. We demonstrate that a dense model permits superior tracking performance under rapid motion compared to a state of the art method using features; and also show the additional usefulness of the dense model for real-time scene interaction in a physics-enhanced augmented reality application.",
"From the Publisher: A basic problem in computer vision is to understand the structure of a real world scene given several images of it. Recent major developments in the theory and practice of scene reconstruction are described in detail in a unified framework. The book covers the geometric principles and how to represent objects algebraically so they can be computed and applied. The authors provide comprehensive background material and explain how to apply the methods and implement the algorithms directly.",
"Incremental Structure-from-Motion is a prevalent strategy for 3D reconstruction from unordered image collections. While incremental reconstruction systems have tremendously advanced in all regards, robustness, accuracy, completeness, and scalability remain the key problems towards building a truly general-purpose pipeline. We propose a new SfM technique that improves upon the state of the art to make a further step towards this ultimate goal. The full reconstruction pipeline is released to the public as an open-source implementation.",
"",
"",
"",
"A new enhancement of ransac, the locally optimized ransac (lo-ransac), is introduced. It has been observed that, to find an optimal solution (with a given probability), the number of samples drawn in ransac is significantly higher than predicted from the mathematical model. This is due to the incorrect assumption, that a model with parameters computed from an outlier-free sample is consistent with all inliers. The assumption rarely holds in practice. The locally optimized ransac makes no new assumptions about the data, on the contrary – it makes the above-mentioned assumption valid by applying local optimization to the solution estimated from the random sample.",
"",
"",
"This paper introduces an approach for enabling existing multi-view stereo methods to operate on extremely large unstructured photo collections. The main idea is to decompose the collection into a set of overlapping sets of photos that can be processed in parallel, and to merge the resulting reconstructions. This overlapping clustering problem is formulated as a constrained optimization and solved iteratively. The merging algorithm, designed to be parallel and out-of-core, incorporates robust filtering steps to eliminate low-quality reconstructions and enforce global visibility constraints. The approach has been tested on several large datasets downloaded from Flickr.com, including one with over ten thousand images, yielding a 3D reconstruction with nearly thirty million points.",
"This paper proposes a novel model and dataset for 3D scene flow estimation with an application to autonomous driving. Taking advantage of the fact that outdoor scenes often decompose into a small number of independently moving objects, we represent each element in the scene by its rigid motion parameters and each superpixel by a 3D plane as well as an index to the corresponding object. This minimal representation increases robustness and leads to a discrete-continuous CRF where the data term decomposes into pairwise potentials between superpixels and objects. Moreover, our model intrinsically segments the scene into its constituting dynamic components. We demonstrate the performance of our model on existing benchmarks as well as a novel realistic dataset with scene flow ground truth. We obtain this dataset by annotating 400 dynamic scenes from the KITTI raw data collection using detailed 3D CAD models for all vehicles in motion. Our experiments also reveal novel challenges which cannot be handled by existing methods."
]
} |
1907.05820 | 2958586587 | We present GLNet, a self-supervised framework for learning depth, optical flow, camera pose and intrinsic parameters from monocular video -- addressing the difficulty of acquiring realistic ground-truth for such tasks. We propose three contributions: 1) we design new loss functions that capture multiple geometric constraints (eg. epipolar geometry) as well as adaptive photometric costs that support multiple moving objects, rigid and non-rigid, 2) we extend the model such that it predicts camera intrinsics, making it applicable to uncalibrated video, and 3) we propose several online finetuning strategies that rely on the symmetry of our self-supervised loss in both training and testing, in particular optimizing model parameters and/or the output of different tasks, leveraging their mutual interactions. The idea of jointly optimizing the system output, under all geometric and photometric constraints can be viewed as a dense generalization of classical bundle adjustment. We demonstrate the effectiveness of our method on KITTI and Cityscapes, where we outperform previous self-supervised approaches on multiple tasks. We also show good generalization for transfer learning. | To address the bottleneck of accurate feature matching, recent work has focused on deep-learning-based approaches for geometric reasoning. Several methods train networks based on ground-truth labels and have been successfully applied to many tasks, including monocular depth estimation @cite_57 @cite_23 @cite_30 @cite_3 , optical flow @cite_24 @cite_6 @cite_32 @cite_22 , geometric matching @cite_50 , disparity from stereo images @cite_35 , or camera pose estimation @cite_58 @cite_12 @cite_7 . Because geometric reasoning tasks are often intrinsically coupled, several methods aim to leverage their inter-dependencies by joint supervision @cite_5 @cite_20 . Ilg et al. @cite_26 jointly estimate disparity, flow and boundaries. Depth estimation is also shown to benefit from combining with surface normals @cite_3 . | {
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_26",
"@cite_22",
"@cite_7",
"@cite_32",
"@cite_3",
"@cite_6",
"@cite_57",
"@cite_24",
"@cite_23",
"@cite_50",
"@cite_5",
"@cite_58",
"@cite_20",
"@cite_12"
],
"mid": [
"2963591054",
"2604231069",
"2887479417",
"2963782415",
"2200124539",
"2548527721",
"2124907686",
"2560474170",
"1905829557",
"764651262",
"2171740948",
"2604233003",
"2624871570",
"2963024893",
"2798512429",
""
],
"abstract": [
"This paper addresses the problem of estimating the depth map of a scene given a single RGB image. We propose a fully convolutional architecture, encompassing residual learning, to model the ambiguous mapping between monocular images and depth maps. In order to improve the output resolution, we present a novel way to efficiently learn feature map up-sampling within the network. For optimization, we introduce the reverse Huber loss that is particularly suited for the task at hand and driven by the value distributions commonly present in depth maps. Our model is composed of a single architecture that is trained end-to-end and does not rely on post-processing techniques, such as CRFs or other additional refinement steps. As a result, it runs in real-time on images or videos. In the evaluation, we show that the proposed model contains fewer parameters and requires fewer training data than the current state of the art, while outperforming all approaches on depth estimation. Code and models are publicly available.",
"We propose a novel deep learning architecture for regressing disparity from a rectified pair of stereo images. We leverage knowledge of the problem’s geometry to form a cost volume using deep feature representations. We learn to incorporate contextual information using 3-D convolutions over this volume. Disparity values are regressed from the cost volume using a proposed differentiable soft argmin operation, which allows us to train our method end-to-end to sub-pixel accuracy without any additional post-processing or regularization. We evaluate our method on the Scene Flow and KITTI datasets and on KITTI we set a new stateof-the-art benchmark, while being significantly faster than competing approaches.",
"Occlusions play an important role in disparity and optical flow estimation, since matching costs are not available in occluded areas and occlusions indicate depth or motion boundaries. Moreover, occlusions are relevant for motion segmentation and scene flow estimation. In this paper, we present an efficient learning-based approach to estimate occlusion areas jointly with disparities or optical flow. The estimated occlusions and motion boundaries clearly improve over the state-of-the-art. Moreover, we present networks with state-of-the-art performance on the popular KITTI benchmark and good generic performance. Making use of the estimated occlusions, we also show improved results on motion segmentation and scene flow estimation.",
"We present a compact but effective CNN model for optical flow, called PWC-Net. PWC-Net has been designed according to simple and well-established principles: pyramidal processing, warping, and the use of a cost volume. Cast in a learnable feature pyramid, PWC-Net uses the current optical flow estimate to warp the CNN features of the second image. It then uses the warped features and features of the first image to construct a cost volume, which is processed by a CNN to estimate the optical flow. PWC-Net is 17 times smaller in size and easier to train than the recent FlowNet2 model. Moreover, it outperforms all published optical flow methods on the MPI Sintel final pass and KITTI 2015 benchmarks, running at about 35 fps on Sintel resolution (1024 A— 436) images. Our models are available on our project website.",
"We present a robust and real-time monocular six degree of freedom relocalization system. Our system trains a convolutional neural network to regress the 6-DOF camera pose from a single RGB image in an end-to-end manner with no need of additional engineering or graph optimisation. The algorithm can operate indoors and outdoors in real time, taking 5ms per frame to compute. It obtains approximately 2m and 3 degrees accuracy for large scale outdoor scenes and 0.5m and 5 degrees accuracy indoors. This is achieved using an efficient 23 layer deep convnet, demonstrating that convnets can be used to solve complicated out of image plane regression problems. This was made possible by leveraging transfer learning from large scale classification data. We show that the PoseNet localizes from high level features and is robust to difficult lighting, motion blur and different camera intrinsics where point based SIFT registration fails. Furthermore we show how the pose feature that is produced generalizes to other scenes allowing us to regress pose with only a few dozen training examples.",
"We learn to compute optical flow by combining a classical spatial-pyramid formulation with deep learning. This estimates large motions in a coarse-to-fine approach by warping one image of a pair at each pyramid level by the current flow estimate and computing an update to the flow. Instead of the standard minimization of an objective function at each pyramid level, we train one deep network per level to compute the flow update. Unlike the recent FlowNet approach, the networks do not need to deal with large motions, these are dealt with by the pyramid. This has several advantages. First, our Spatial Pyramid Network (SPyNet) is much simpler and 96 smaller than FlowNet in terms of model parameters. This makes it more efficient and appropriate for embedded applications. Second, since the flow at each pyramid level is small (",
"Predicting the depth (or surface normal) of a scene from single monocular color images is a challenging task. This paper tackles this challenging and essentially underdetermined problem by regression on deep convolutional neural network (DCNN) features, combined with a post-processing refining step using conditional random fields (CRF). Our framework works at two levels, super-pixel level and pixel level. First, we design a DCNN model to learn the mapping from multi-scale image patches to depth or surface normal values at the super-pixel level. Second, the estimated super-pixel depth or surface normal is refined to the pixel level by exploiting various potentials on the depth or surface normal map, which includes a data term, a smoothness term among super-pixels and an auto-regression term characterizing the local structure of the estimation map. The inference problem can be efficiently solved because it admits a closed-form solution. Experiments on the Make3D and NYU Depth V2 datasets show competitive results compared with recent state-of-the-art methods.",
"The FlowNet demonstrated that optical flow estimation can be cast as a learning problem. However, the state of the art with regard to the quality of the flow has still been defined by traditional methods. Particularly on small displacements and real-world data, FlowNet cannot compete with variational methods. In this paper, we advance the concept of end-to-end learning of optical flow and make it work really well. The large improvements in quality and speed are caused by three major contributions: first, we focus on the training data and show that the schedule of presenting data during training is very important. Second, we develop a stacked architecture that includes warping of the second image with intermediate optical flow. Third, we elaborate on small displacements by introducing a subnetwork specializing on small motions. FlowNet 2.0 is only marginally slower than the original FlowNet but decreases the estimation error by more than 50 . It performs on par with state-of-the-art methods, while running at interactive frame rates. Moreover, we present faster variants that allow optical flow computation at up to 140fps with accuracy matching the original FlowNet.",
"In this paper we address three different computer vision tasks using a single basic architecture: depth prediction, surface normal estimation, and semantic labeling. We use a multiscale convolutional network that is able to adapt easily to each task using only small modifications, regressing from the input image to the output map directly. Our method progressively refines predictions using a sequence of scales, and captures many image details without any superpixels or low-level segmentation. We achieve state-of-the-art performance on benchmarks for all three tasks.",
"Convolutional neural networks (CNNs) have recently been very successful in a variety of computer vision tasks, especially on those linked to recognition. Optical flow estimation has not been among the tasks CNNs succeeded at. In this paper we construct CNNs which are capable of solving the optical flow estimation problem as a supervised learning task. We propose and compare two architectures: a generic architecture and another one including a layer that correlates feature vectors at different image locations. Since existing ground truth data sets are not sufficiently large to train a CNN, we generate a large synthetic Flying Chairs dataset. We show that networks trained on this unrealistic data still generalize very well to existing datasets such as Sintel and KITTI, achieving competitive accuracy at frame rates of 5 to 10 fps.",
"Predicting depth is an essential component in understanding the 3D geometry of a scene. While for stereo images local correspondence suffices for estimation, finding depth relations from a single image is less straightforward, requiring integration of both global and local information from various cues. Moreover, the task is inherently ambiguous, with a large source of uncertainty coming from the overall scale. In this paper, we present a new method that addresses this task by employing two deep network stacks: one that makes a coarse global prediction based on the entire image, and another that refines this prediction locally. We also apply a scale-invariant error to help measure depth relations rather than scale. By leveraging the raw datasets as large sources of training data, our method achieves state-of-the-art results on both NYU Depth and KITTI, and matches detailed depth boundaries without the need for superpixelation.",
"We address the problem of determining correspondences between two images in agreement with a geometric model such as an affine or thin-plate spline transformation, and estimating its parameters. The contributions of this work are three-fold. First, we propose a convolutional neural network architecture for geometric matching. The architecture is based on three main components that mimic the standard steps of feature extraction, matching and simultaneous inlier detection and model parameter estimation, while being trainable end-to-end. Second, we demonstrate that the network parameters can be trained from synthetically generated imagery without the need for manual annotation and that our matching layer significantly increases generalization capabilities to never seen before images. Finally, we show that the same model can perform both instance-level and category-level matching giving state-of-the-art results on the challenging Proposal Flow dataset.",
"Multi-task learning (MTL) has led to successes in many applications of machine learning, from natural language processing and speech recognition to computer vision and drug discovery. This article aims to give a general overview of MTL, particularly in deep neural networks. It introduces the two most common methods for MTL in Deep Learning, gives an overview of the literature, and discusses recent advances. In particular, it seeks to help ML practitioners apply MTL by shedding light on how MTL works and providing guidelines for choosing appropriate auxiliary tasks.",
"We present a robust and real-time monocular six degree of freedom visual relocalization system. We use a Bayesian convolutional neural network to regress the 6-DOF camera pose from a single RGB image. It is trained in an end-to-end manner with no need of additional engineering or graph optimisation. The algorithm can operate indoors and outdoors in real time, taking under 6ms to compute. It obtains approximately 2m and 6° accuracy for very large scale outdoor scenes and 0.5m and 10° accuracy indoors. Using a Bayesian convolutional neural network implementation we obtain an estimate of the model's relocalization uncertainty and improve state of the art localization accuracy on a large scale outdoor dataset. We leverage the uncertainty measure to estimate metric relocalization error and to detect the presence or absence of the scene in the input image. We show that the model's uncertainty is caused by images being dissimilar to the training dataset in either pose or appearance.",
"Do visual tasks have a relationship, or are they unrelated? For instance, could having surface normals simplify estimating the depth of an image? Intuition answers these questions positively, implying existence of a structure among visual tasks. Knowing this structure has notable values; it is the concept underlying transfer learning and provides a principled way for identifying redundancies across tasks, e.g., to seamlessly reuse supervision among related tasks or solve many tasks in one system without piling up the complexity. We proposes a fully computational approach for modeling the structure of space of visual tasks. This is done via finding (first and higher-order) transfer learning dependencies across a dictionary of twenty six 2D, 2.5D, 3D, and semantic tasks in a latent space. The product is a computational taxonomic map for task transfer learning. We study the consequences of this structure, e.g. nontrivial emerged relationships, and exploit them to reduce the demand for labeled data. For example, we show that the total number of labeled datapoints needed for solving a set of 10 tasks can be reduced by roughly 2 3 (compared to training independently) while keeping the performance nearly the same. We provide a set of tools for computing and probing this taxonomical structure including a solver that users can employ to devise efficient supervision policies for their use cases.",
""
]
} |
1907.05820 | 2958586587 | We present GLNet, a self-supervised framework for learning depth, optical flow, camera pose and intrinsic parameters from monocular video -- addressing the difficulty of acquiring realistic ground-truth for such tasks. We propose three contributions: 1) we design new loss functions that capture multiple geometric constraints (eg. epipolar geometry) as well as adaptive photometric costs that support multiple moving objects, rigid and non-rigid, 2) we extend the model such that it predicts camera intrinsics, making it applicable to uncalibrated video, and 3) we propose several online finetuning strategies that rely on the symmetry of our self-supervised loss in both training and testing, in particular optimizing model parameters and/or the output of different tasks, leveraging their mutual interactions. The idea of jointly optimizing the system output, under all geometric and photometric constraints can be viewed as a dense generalization of classical bundle adjustment. We demonstrate the effectiveness of our method on KITTI and Cityscapes, where we outperform previous self-supervised approaches on multiple tasks. We also show good generalization for transfer learning. | To train using ground-truth labels, different authors rely either on synthetic dataset creation @cite_55 or on specialized equipment (e.g., LIDAR) for data collection @cite_52 @cite_27 , which is expensive in practice. To reduce the amount of ground-truth labels required for training, unsupervised approaches have recently emerged. The core idea is to generate a differentiable warping between two images and use an underlying photometric loss as a proxy to train a self-supervised model. This has been applied to stereo images @cite_2 @cite_43 @cite_48 and optical flow @cite_46 @cite_53 @cite_1 . | {
"cite_N": [
"@cite_48",
"@cite_55",
"@cite_53",
"@cite_1",
"@cite_52",
"@cite_43",
"@cite_27",
"@cite_2",
"@cite_46"
],
"mid": [
"",
"2259424905",
"",
"",
"2115579991",
"2520707372",
"",
"2949634581",
"2507953016"
],
"abstract": [
"",
"Recent work has shown that optical flow estimation can be formulated as a supervised learning task and can be successfully solved with convolutional networks. Training of the so-called FlowNet was enabled by a large synthetically generated dataset. The present paper extends the concept of optical flow estimation via convolutional networks to disparity and scene flow estimation. To this end, we propose three synthetic stereo video datasets with sufficient realism, variation, and size to successfully train large networks. Our datasets are the first large-scale datasets to enable training and evaluation of scene flow methods. Besides the datasets, we present a convolutional network for real-time disparity estimation that provides state-of-the-art results. By combining a flow and disparity estimation network and training it jointly, we demonstrate the first scene flow estimation with a convolutional network.",
"",
"",
"We present a novel dataset captured from a VW station wagon for use in mobile robotics and autonomous driving research. In total, we recorded 6 hours of traffic scenarios at 10-100 Hz using a variety of sensor modalities such as high-resolution color and grayscale stereo cameras, a Velodyne 3D laser scanner and a high-precision GPS IMU inertial navigation system. The scenarios are diverse, capturing real-world traffic situations, and range from freeways over rural areas to inner-city scenes with many static and dynamic objects. Our data is calibrated, synchronized and timestamped, and we provide the rectified and raw image sequences. Our dataset also contains object labels in the form of 3D tracklets, and we provide online benchmarks for stereo, optical flow, object detection and other tasks. This paper describes our recording platform, the data format and the utilities that we provide.",
"Learning based methods have shown very promising results for the task of depth estimation in single images. However, most existing approaches treat depth prediction as a supervised regression problem and as a result, require vast quantities of corresponding ground truth depth data for training. Just recording quality depth data in a range of environments is a challenging problem. In this paper, we innovate beyond existing approaches, replacing the use of explicit depth data during training with easier-to-obtain binocular stereo footage. We propose a novel training objective that enables our convolutional neural network to learn to perform single image depth estimation, despite the absence of ground truth depth data. Ex-ploiting epipolar geometry constraints, we generate disparity images by training our network with an image reconstruction loss. We show that solving for image reconstruction alone results in poor quality depth images. To overcome this problem, we propose a novel training loss that enforces consistency between the disparities produced relative to both the left and right images, leading to improved performance and robustness compared to existing approaches. Our method produces state of the art results for monocular depth estimation on the KITTI driving dataset, even outperforming supervised methods that have been trained with ground truth depth.",
"",
"A significant weakness of most current deep Convolutional Neural Networks is the need to train them using vast amounts of manu- ally labelled data. In this work we propose a unsupervised framework to learn a deep convolutional neural network for single view depth predic- tion, without requiring a pre-training stage or annotated ground truth depths. We achieve this by training the network in a manner analogous to an autoencoder. At training time we consider a pair of images, source and target, with small, known camera motion between the two such as a stereo pair. We train the convolutional encoder for the task of predicting the depth map for the source image. To do so, we explicitly generate an inverse warp of the target image using the predicted depth and known inter-view displacement, to reconstruct the source image; the photomet- ric error in the reconstruction is the reconstruction loss for the encoder. The acquisition of this training data is considerably simpler than for equivalent systems, requiring no manual annotation, nor calibration of depth sensor to camera. We show that our network trained on less than half of the KITTI dataset (without any further augmentation) gives com- parable performance to that of the state of art supervised methods for single view depth estimation.",
"Recently, convolutional networks (convnets) have proven useful for predicting optical flow. Much of this success is predicated on the availability of large datasets that require expensive and involved data acquisition and laborious labeling. To bypass these challenges, we propose an unsupervised approach (i.e., without leveraging groundtruth flow) to train a convnet end-to-end for predicting optical flow between two images. We use a loss function that combines a data term that measures photometric constancy over time with a spatial term that models the expected variation of flow across the image. Together these losses form a proxy measure for losses based on the groundtruth flow. Empirically, we show that a strong convnet baseline trained with the proposed unsupervised approach outperforms the same network trained with supervision on the KITTI dataset."
]
} |
1907.05820 | 2958586587 | We present GLNet, a self-supervised framework for learning depth, optical flow, camera pose and intrinsic parameters from monocular video -- addressing the difficulty of acquiring realistic ground-truth for such tasks. We propose three contributions: 1) we design new loss functions that capture multiple geometric constraints (e.g. epipolar geometry) as well as adaptive photometric costs that support multiple moving objects, rigid and non-rigid, 2) we extend the model such that it predicts camera intrinsics, making it applicable to uncalibrated video, and 3) we propose several online finetuning strategies that rely on the symmetry of our self-supervised loss in both training and testing, in particular optimizing model parameters and/or the output of different tasks, leveraging their mutual interactions. The idea of jointly optimizing the system output, under all geometric and photometric constraints can be viewed as a dense generalization of classical bundle adjustment. We demonstrate the effectiveness of our method on KITTI and Cityscapes, where we outperform previous self-supervised approaches on multiple tasks. We also show good generalization for transfer learning. | In this work, we focus on learning from monocular video. Many recent methods go along similar lines. Zhou et al. @cite_49 couple the learning of monocular depth and ego-motion. Vijayanarasimhan et al. @cite_8 learn object masks and rigid motion parameters of several objects. Subsequent methods further improved performance using various techniques: Mahjourian et al. @cite_11 compute a 3D loss using ICP, and Klodt et al. @cite_4 use a supervision term that relies on the SfM output to regularize an unsupervised learning process. The learning can be facilitated by integrating other cues such as optical flow @cite_9 @cite_45 , edges @cite_56 or modeling multiple rigid motions @cite_29 . | {
"cite_N": [
"@cite_4",
"@cite_8",
"@cite_9",
"@cite_29",
"@cite_56",
"@cite_45",
"@cite_49",
"@cite_11"
],
"mid": [
"2895192073",
"2608018946",
"2963583471",
"2901241793",
"2963549785",
"",
"2609883120",
"2963906250"
],
"abstract": [
"Recent work has demonstrated that it is possible to learn deep neural networks for monocular depth and ego-motion estimation from unlabelled video sequences, an interesting theoretical development with numerous advantages in applications. In this paper, we propose a number of improvements to these approaches. First, since such self-supervised approaches are based on the brightness constancy assumption, which is valid only for a subset of pixels, we propose a probabilistic learning formulation where the network predicts distributions over variables rather than specific values. As these distributions are conditioned on the observed image, the network can learn which scene and object types are likely to violate the model assumptions, resulting in more robust learning. We also propose to build on dozens of years of experience in developing handcrafted structure-from-motion (SFM) algorithms. We do so by using an off-the-shelf SFM system to generate a supervisory signal for the deep neural network. While this signal is also noisy, we show that our probabilistic formulation can learn and account for the defects of SFM, helping to integrate different sources of information and boosting the overall performance of the network.",
"We propose SfM-Net, a geometry-aware neural network for motion estimation in videos that decomposes frame-to-frame pixel motion in terms of scene and object depth, camera motion and 3D object rotations and translations. Given a sequence of frames, SfM-Net predicts depth, segmentation, camera and rigid object motions, converts those into a dense frame-to-frame motion field (optical flow), differentiably warps frames in time to match pixels and back-propagates. The model can be trained with various degrees of supervision: 1) self-supervised by the re-projection photometric error (completely unsupervised), 2) supervised by ego-motion (camera motion), or 3) supervised by depth (e.g., as provided by RGBD sensors). SfM-Net extracts meaningful depth estimates and successfully estimates frame-to-frame camera rotations and translations. It often successfully segments the moving objects in the scene, even though such supervision is never provided.",
"We propose GeoNet, a jointly unsupervised learning framework for monocular depth, optical flow and egomotion estimation from videos. The three components are coupled by the nature of 3D scene geometry, jointly learned by our framework in an end-to-end manner. Specifically, geometric relationships are extracted over the predictions of individual modules and then combined as an image reconstruction loss, reasoning about static and dynamic scene parts separately. Furthermore, we propose an adaptive geometric consistency loss to increase robustness towards outliers and non-Lambertian regions, which resolves occlusions and texture ambiguities effectively. Experimentation on the KITTI driving dataset reveals that our scheme achieves state-of-the-art results in all of the three tasks, performing better than previously unsupervised methods and comparably with supervised ones.",
"Learning to predict scene depth from RGB inputs is a challenging task both for indoor and outdoor robot navigation. In this work we address unsupervised learning of scene depth and robot ego-motion where supervision is provided by monocular videos, as cameras are the cheapest, least restrictive and most ubiquitous sensor for robotics. Previous work in unsupervised image-to-depth learning has established strong baselines in the domain. We propose a novel approach which produces higher quality results, is able to model moving objects and is shown to transfer across data domains, e.g. from outdoors to indoor scenes. The main idea is to introduce geometric structure in the learning process, by modeling the scene and the individual objects; camera ego-motion and object motions are learned from monocular videos as input. Furthermore an online refinement method is introduced to adapt learning on the fly to unknown domains. The proposed approach outperforms all state-of-the-art approaches, including those that handle motion e.g. through learned flow. Our results are comparable in quality to the ones which used stereo as supervision and significantly improve depth prediction on scenes and datasets which contain a lot of object motion. The approach is of practical relevance, as it allows transfer across environments, by transferring models trained on data collected for robot navigation in urban scenes to indoor navigation settings. The code associated with this paper can be found at this https URL.",
"Learning to estimate 3D geometry in a single image by watching unlabeled videos via deep convolutional network is attracting significant attention. In this paper, we introduce a \"3D as-smooth-as-possible (3D-ASAP)\" prior inside the pipeline, which enables joint estimation of edges and 3D scene, yielding results with significant improvement in accuracy for fine detailed structures. Specifically, we define the 3D-ASAP prior by requiring that any two points recovered in 3D from an image should lie on an existing planar surface if no other cues provided. We design an unsupervised framework that Learns Edges and Geometry (depth, normal) all at Once (LEGO). The predicted edges are embedded into depth and surface normal smoothness terms, where pixels without edges in-between are constrained to satisfy the prior. In our framework, the predicted depths, normals and edges are forced to be consistent all the time. We conduct experiments on KITTI to evaluate our estimated geometry and CityScapes to perform edge evaluation. We show that in all of the tasks, i.e. depth, normal and edge, our algorithm vastly outperforms other state-of-the-art (SOTA) algorithms, demonstrating the benefits of our approach.",
"",
"We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. In common with recent work [10, 14, 16], we use an end-to-end learning approach with view synthesis as the supervisory signal. In contrast to the previous work, our method is completely unsupervised, requiring only monocular video sequences for training. Our method uses single-view depth and multiview pose networks, with a loss based on warping nearby views to the target using the computed depth and pose. The networks are thus coupled by the loss during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performs comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performs favorably compared to established SLAM systems under comparable input settings.",
"We present a novel approach for unsupervised learning of depth and ego-motion from monocular video. Unsupervised learning removes the need for separate supervisory signals (depth or ego-motion ground truth, or multi-view video). Prior work in unsupervised depth learning uses pixel-wise or gradient-based losses, which only consider pixels in small local neighborhoods. Our main contribution is to explicitly consider the inferred 3D geometry of the whole scene, and enforce consistency of the estimated 3D point clouds and ego-motion across consecutive frames. This is a challenging task and is solved by a novel (approximate) backpropagation algorithm for aligning 3D structures. We combine this novel 3D-based loss with 2D losses based on photometric quality of frame reconstructions using estimated depth and ego-motion from adjacent frames. We also incorporate validity masks to avoid penalizing areas in which no useful information exists. We test our algorithm on the KITTI dataset and on a video dataset captured on an uncalibrated mobile phone camera. Our proposed approach consistently improves depth estimates on both datasets, and outperforms the state-of-the-art for both depth and ego-motion. Because we only require a simple video, learning depth and ego-motion on large and varied datasets becomes possible. We demonstrate this by training on the low quality uncalibrated video dataset and evaluating on KITTI, ranking among top performing prior methods which are trained on KITTI itself.1"
]
} |
1907.06013 | 2961188780 | This paper describes Motion Planning Networks (MPNet), a computationally efficient, learning-based neural planner for solving motion planning problems. MPNet uses neural networks to learn general near-optimal heuristics for path planning in seen and unseen environments. It receives environment information as point-clouds, as well as a robot's initial and desired goal configurations and recursively calls itself to bidirectionally generate connectable paths. In addition to finding directly connectable and near-optimal paths in a single pass, we show that worst-case theoretical guarantees can be proven if we merge this neural network strategy with classical sample-based planners in a hybrid approach while still retaining significant computational and optimality improvements. To learn the MPNet models, we present an active continual learning approach that enables MPNet to learn from streaming data and actively ask for expert demonstrations when needed, drastically reducing data for training. We validate MPNet against gold-standard and state-of-the-art planning methods in a variety of problems from 2D to 7D robot configuration spaces in challenging and cluttered environments, with results showing significant and consistently stronger performance metrics, and motivating neural planning in general as a modern strategy for solving motion planning problems efficiently. | Biased sampling heuristics adaptively sample the robot state-space to overcome limitations caused by random uniform exploration in underlying SMP methods. For instance, P-RRT* @cite_15 @cite_61 incorporates artificial potential fields @cite_42 into RRT* to generate goal directed trees for rapid convergence to an optimal path solution. In a similar vein, the Informed-RRT* @cite_33 and BIT* (Batch Informed Trees) @cite_13 were proposed. Informed-RRT* defines an ellipsoidal region using RRT*'s initial path solution to adaptively sample the configuration space for optimal path planning. Despite improvements in computation time, Informed-RRT* suffers in situations where finding an initial path is itself challenging. On the other hand, BIT* is an incremental graph search method that instantiates a dynamically-changing ellipsoidal region for batch sampling to compute paths. Despite some improvements in computation speed, these biased sampling heuristics still suffer from the curse of dimensionality. | {
"cite_N": [
"@cite_61",
"@cite_33",
"@cite_42",
"@cite_15",
"@cite_13"
],
"mid": [
"2021464228",
"1976930960",
"2177274602",
"2211552581",
"1814533834"
],
"abstract": [
"The Rapidly Exploring Random Tree Star (RRT*) is an extension of the Rapidly Exploring Random Tree path finding algorithm. RRT* guarantees an optimal, collision free path solution but is limited by slow convergence rates and inefficient memory utilization. This paper presents APGD-RRT*, a variant of RRT* which utilizes Artificial Potential Fields to improve RRT* performance, providing relatively better convergence rates. Simulation results under different environments between the proposed APGD-RRT* and RRT* algorithms demonstrate this marked improvement under various test environments.",
"Rapidly-exploring random trees (RRTs) are pop- ular in motion planning because they find solutions efficiently to single-query problems. Optimal RRTs (RRT*s) extend RRTs to the problem of finding the optimal solution, but in doing so asymptotically find the optimal path from the initial state to every state in the planning domain. This behaviour is not only inefficient but also inconsistent with their single-query nature. For problems seeking to minimize path length, the subset of states that can improve a solution can be described by a prolate hyperspheroid. We show that unless this subset is sam- pled directly, the probability of improving a solution becomes arbitrarily small in large worlds or high state dimensions. In this paper, we present an exact method to focus the search by directly sampling this subset. The advantages of the presented sampling technique are demonstrated with a new algorithm, Informed RRT*. This method retains the same probabilistic guarantees on complete- ness and optimality as RRT* while improving the convergence rate and final solution quality. We present the algorithm as a simple modification to RRT* that could be further extended by more advanced path-planning algorithms. We show exper- imentally that it outperforms RRT* in rate of convergence, final solution cost, and ability to find difficult passages while demonstrating less dependence on the state dimension and range of the planning problem.",
"",
"Rapidly-exploring Random Tree star (RRT*) is a recently proposed extension of Rapidly-exploring Random Tree (RRT) algorithm that provides a collision-free, asymptotically optimal path regardless of obstacles geometry in a given environment. However, one of the limitation in the RRT* algorithm is slow convergence to optimal path solution. As a result it consumes high memory as well as time due to the large number of iterations utilised in achieving optimal path solution. To overcome these limitations, we propose the potential function based-RRT* that incorporates the artificial potential field algorithm in RRT*. The proposed algorithm allows a considerable decrease in the number of iterations and thus leads to more efficient memory utilization and an accelerated convergence rate. In order to illustrate the usefulness of the proposed algorithm in terms of space execution and convergence rate, this paper presents rigorous simulation based comparisons between the proposed techniques and RRT* under different environmental conditions. Moreover, both algorithms are also tested and compared under non-holonomic differential constraints.",
"In this paper, we present Batch Informed Trees (BIT*), a planning algorithm based on unifying graph- and sampling-based planning techniques. By recognizing that a set of samples describes an implicit random geometric graph (RGG), we are able to combine the efficient ordered nature of graph-based techniques, such as A*, with the anytime scalability of sampling-based algorithms, such as Rapidly-exploring Random Trees (RRT)."
]
} |
1907.06013 | 2961188780 | This paper describes Motion Planning Networks (MPNet), a computationally efficient, learning-based neural planner for solving motion planning problems. MPNet uses neural networks to learn general near-optimal heuristics for path planning in seen and unseen environments. It receives environment information as point-clouds, as well as a robot's initial and desired goal configurations and recursively calls itself to bidirectionally generate connectable paths. In addition to finding directly connectable and near-optimal paths in a single pass, we show that worst-case theoretical guarantees can be proven if we merge this neural network strategy with classical sample-based planners in a hybrid approach while still retaining significant computational and optimality improvements. To learn the MPNet models, we present an active continual learning approach that enables MPNet to learn from streaming data and actively ask for expert demonstrations when needed, drastically reducing data for training. We validate MPNet against gold-standard and state-of-the-art planning methods in a variety of problems from 2D to 7D robot configuration spaces in challenging and cluttered environments, with results showing significant and consistently stronger performance metrics, and motivating neural planning in general as a modern strategy for solving motion planning problems efficiently. | Lazy edge evaluation methods, on the other hand, have shown to exhibit significant improvements in computation speeds by evaluating edges only along the potential path solutions. However, these methods are critically dependent on the underlying edge selector and tend to exhibit limited performance in cluttered environments @cite_10 . Bidirectional path generation improves the algorithm performance in narrow passages but still inherits the limitations of baseline SMPs @cite_39 @cite_58 . | {
"cite_N": [
"@cite_58",
"@cite_10",
"@cite_39"
],
"mid": [
"2877466430",
"",
"2086860466"
],
"abstract": [
"Abstract Rapidly-exploring Random Tree star (RRT*) has recently gained immense popularity in the motion planning community as it provides a probabilistically complete and asymptotically optimal solution without requiring the complete information of the obstacle space. In spite of all of its advantages, RRT* converges to optimal solution very slowly. Hence to improve the convergence rate, its bidirectional variants were introduced, the Bi-directional RRT* (B-RRT*) and Intelligent Bi-directional RRT* (IB-RRT*). However, as both variants perform pure exploration, they tend to suffer in highly cluttered environments. In order to overcome these limitations we introduce a new concept of potentially guided bidirectional trees in our proposed Potentially Guided Intelligent Bi-directional RRT* (PIB-RRT*) and Potentially Guided Bi-directional RRT* (PB-RRT*). The proposed algorithms greatly improve the convergence rate and have a more efficient memory utilization. Theoretical and experimental evaluation of the proposed algorithms have been made and compared to the latest state of the art motion planning algorithms under different challenging environmental conditions and have proven their remarkable improvement in efficiency and convergence rate.",
"",
"Abstract The sampling-based motion planning algorithm known as Rapidly-exploring Random Trees (RRT) has gained the attention of many researchers due to their computational efficiency and effectiveness. Recently, a variant of RRT called RRT* has been proposed that ensures asymptotic optimality. Subsequently its bidirectional version has also been introduced in the literature known as Bidirectional-RRT* (B-RRT*). We introduce a new variant called Intelligent Bidirectional-RRT* (IB-RRT*) which is an improved variant of the optimal RRT* and bidirectional version of RRT* (B-RRT*) algorithms and is specially designed for complex cluttered environments. IB-RRT* utilizes the bidirectional trees approach and introduces intelligent sample insertion heuristic for fast convergence to the optimal path solution using uniform sampling heuristics. The proposed algorithm is evaluated theoretically and experimental results are presented that compares IB-RRT* with RRT* and B-RRT*. Moreover, experimental results demonstrate the superior efficiency of IB-RRT* in comparison with RRT* and B-RRT in complex cluttered environments."
]
} |
1907.06013 | 2961188780 | This paper describes Motion Planning Networks (MPNet), a computationally efficient, learning-based neural planner for solving motion planning problems. MPNet uses neural networks to learn general near-optimal heuristics for path planning in seen and unseen environments. It receives environment information as point-clouds, as well as a robot's initial and desired goal configurations and recursively calls itself to bidirectionally generate connectable paths. In addition to finding directly connectable and near-optimal paths in a single pass, we show that worst-case theoretical guarantees can be proven if we merge this neural network strategy with classical sample-based planners in a hybrid approach while still retaining significant computational and optimality improvements. To learn the MPNet models, we present an active continual learning approach that enables MPNet to learn from streaming data and actively ask for expert demonstrations when needed, drastically reducing data for training. We validate MPNet against gold-standard and state-of-the-art planning methods in a variety of problems from 2D to 7D robot configuration spaces in challenging and cluttered environments, with results showing significant and consistently stronger performance metrics, and motivating neural planning in general as a modern strategy for solving motion planning problems efficiently. | There have also been attempts towards building learning-based motion planners. For instance, Value Iteration Networks (VIN) @cite_5 approximates a planner by emulating value iteration using recurrent convolutional neural networks and max-pooling. However, VIN is only applicable for discrete planning tasks. Universal Planning Networks (UPN) @cite_6 extends VIN to continuous control problems using gradient descent over generated actions to find a motion plan connecting the given start and goal observations. However, these methods do not generalize to novel environments or tasks and thus require frequent retraining. The most relevant approach to our neural planner (MPNet) is L2RRT @cite_60 that plans motion in learned latent spaces using the RRT method. L2RRT learns a state-space encoding model, an agent's dynamics model, and a collision checking model. However, it is unclear whether the existence of a path solution in configuration space will always imply the existence of a path in the learned latent space and vice versa. | {
"cite_N": [
"@cite_5",
"@cite_6",
"@cite_60"
],
"mid": [
"2258731934",
"2795756076",
"2885010347"
],
"abstract": [
"We introduce the value iteration network (VIN): a fully differentiable neural network with a planning module' embedded within. VINs can learn to plan, and are suitable for predicting outcomes that involve planning-based reasoning, such as policies for reinforcement learning. Key to our approach is a novel differentiable approximation of the value-iteration algorithm, which can be represented as a convolutional neural network, and trained end-to-end using standard backpropagation. We evaluate VIN based policies on discrete and continuous path-planning domains, and on a natural-language based search task. We show that by learning an explicit planning computation, VIN policies generalize better to new, unseen domains.",
"A key challenge in complex visuomotor control is learning abstract representations that are effective for specifying goals, planning, and generalization. To this end, we introduce universal planning networks (UPN). UPNs embed differentiable planning within a goal-directed policy. This planning computation unrolls a forward model in a latent space and infers an optimal action plan through gradient descent trajectory optimization. The plan-by-gradient-descent process and its underlying representations are learned end-to-end to directly optimize a supervised imitation learning objective. We find that the representations learned are not only effective for goal-directed visual imitation via gradient-based trajectory optimization, but can also provide a metric for specifying goals using images. The learned representations can be leveraged to specify distance-based rewards to reach new target states for model-free reinforcement learning, resulting in substantially more effective learning when solving new tasks described via image-based goals. We were able to achieve successful transfer of visuomotor planning strategies across robots with significantly different morphologies and actuation capabilities.",
"This letter presents latent sampling-based motion planning (L-SBMP), a methodology toward computing motion plans for complex robotic systems by learning a plannable latent representation. Recent works in control of robotic systems have effectively leveraged local, low-dimensional embeddings of high-dimensional dynamics. In this letter, we combine these recent advances with techniques from sampling-based motion planning (SBMP) in order to design a methodology capable of planning for high-dimensional robotic systems beyond the reach of traditional approaches (e.g., humanoids, or even systems where planning occurs in the visual space). Specifically, the learned latent space is constructed through an autoencoding network, a dynamics network, and a collision checking network, which mirror the three main algorithmic primitives of SBMP, namely state sampling, local steering, and collision checking. Notably, these networks can be trained through only raw data of the system's states and actions along with a supervising collision checker. Building upon these networks, an RRT-based algorithm is used to plan motions directly in the latent space —we refer to this exploration algorithm as learned latent RRT. This algorithm globally explores the latent space and is capable of generalizing to new environments. The overall methodology is demonstrated on two planning problems, namely a visual planning problem, whereby planning happens in the visual (pixel) space, and a humanoid robot planning problem."
]
} |
1907.05792 | 2957380905 | Goal-oriented dialog systems, which can be trained end-to-end without manually encoding domain-specific features, show tremendous promise in the customer support use-case e.g. flight booking, hotel reservation, technical support, student advising etc. These dialog systems must learn to interact with external domain knowledge to achieve the desired goal e.g. recommending courses to a student, booking a table at a restaurant etc. This paper presents extended Enhanced Sequential Inference Model (ESIM) models: a) K-ESIM (Knowledge-ESIM), which incorporates the external domain knowledge and b) T-ESIM (Targeted-ESIM), which leverages information from similar conversations to improve the prediction accuracy. Our proposed models and the baseline ESIM model are evaluated on the Ubuntu and Advising datasets in the Sentence Selection track of the latest Dialog System Technology Challenge (DSTC7), where the goal is to find the correct next utterance, given a partial conversation, from a set of candidates. Our preliminary results suggest that incorporating external knowledge sources and leveraging information from similar dialogs leads to performance improvements for predicting the next utterance. | There have been several studies on incorporating unstructured external information into dialog models. ghazvininejad2017knowledge proposed a knowledge-grounded neural conversation model by improving the seq2seq approach to produce more contentful responses. The model was trained using Twitter Dialog Dataset @cite_13 as dialog context and foursquare data as the external information. young2017augmenting incorporated structured knowledge from knowledge graphs and achieved an improved dialog system over the Twitter Dialog Dataset. lowe2015incorporating used the Linux manual pages as the external knowledge information for improving the next utterance prediction task and showed reasonable accuracy gains. | {
"cite_N": [
"@cite_13"
],
"mid": [
"2963206148"
],
"abstract": [
"Sequence-to-sequence neural network models for generation of conversational responses tend to generate safe, commonplace responses (e.g., I don’t know) regardless of the input. We suggest that the traditional objective function, i.e., the likelihood of output (response) given input (message) is unsuited to response generation tasks. Instead we propose using Maximum Mutual Information (MMI) as the objective function in neural models. Experimental results demonstrate that the proposed MMI models produce more diverse, interesting, and appropriate responses, yielding substantive gains in BLEU scores on two conversational datasets and in human evaluations."
]
} |
1907.06005 | 2959491548 | The ever evolving informatics technology has gradually bounded human and computer in a compact way. Understanding user behavior becomes a key enabler in many fields such as sedentary-related healthcare, human-computer interaction (HCI) and affective computing. Traditional sensor-based and vision-based user behavior analysis approaches are obtrusive in general, hindering their usage in realworld. Therefore, in this article, we first introduce WiFi signal as a new source instead of sensor and vision for unobtrusive user behaviors analysis. Then we design BeSense, a contactless behavior analysis system leveraging signal processing and computational intelligence over WiFi channel state information (CSI). We prototype BeSense on commodity low-cost WiFi devices and evaluate its performance in realworld environments. Experimental results have verified its effectiveness in recognizing user behaviors. | WiFi-based behavior sensing technology has many advantages over traditional behavior sensing technology (e.g. vision-based sensing technology, infrared-based sensing technology and dedicated sensor based sensing technology) in terms of non-line-of-sight, passive sensing (no need to carry sensors), low cost, easy deployment, no restrictions on lighting conditions, and strong scalability. A large number of applications have emerged based on existing WiFi signals. From daily behavioral awareness @cite_18 @cite_4 and gesture recognition @cite_3 @cite_19 to identity authentication @cite_33 @cite_24 and from individual physiological indicators @cite_13 @cite_27 to group perception @cite_32 @cite_20 and fall detection @cite_21 @cite_11 , behavior sensing technology based on WiFi is showing unprecedented potential for application, achieving not only the interaction between machines and machines but also the natural interaction between humans and machines. | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_33",
"@cite_21",
"@cite_32",
"@cite_3",
"@cite_24",
"@cite_19",
"@cite_27",
"@cite_13",
"@cite_20",
"@cite_11"
],
"mid": [
"2172292165",
"",
"2345136132",
"",
"2035883018",
"2129149054",
"",
"",
"",
"2507908806",
"2108611194",
""
],
"abstract": [
"Activity monitoring in home environments has become increasingly important and has the potential to support a broad array of applications including elder care, well-being management, and latchkey child safety. Traditional approaches involve wearable sensors and specialized hardware installations. This paper presents device-free location-oriented activity identification at home through the use of existing WiFi access points and WiFi devices (e.g., desktops, thermostats, refrigerators, smartTVs, laptops). Our low-cost system takes advantage of the ever more complex web of WiFi links between such devices and the increasingly fine-grained channel state information that can be extracted from such links. It examines channel features and can uniquely identify both in-place activities and walking movements across a home by comparing them against signal profiles. Signal profiles construction can be semi-supervised and the profiles can be adaptively updated to accommodate the movement of the mobile devices and day-to-day signal calibration. Our experimental evaluation in two apartments of different size demonstrates that our approach can achieve over 96 average true positive rate and less than 1 average false positive rate to distinguish a set of in-place and walking activities with only a single WiFi access point. Our prototype also shows that our system can work with wider signal band (802.11ac) with even higher accuracy.",
"",
"There has been a growing interest in equipping the objects and environment surrounding users with sensing capabilities. Smart indoor spaces such as smart homes and offices can implement the sensing and processing functionality, relieving users from the need of wearing carrying smart devices. Enabling such smart spaces requires device-free, effortless sensing of user's identity and activities. Device-free sensing using WiFi has shown great potential in such scenarios, however, a fundamental question of person identification has remained unsolved. In this paper, we present WiWho, a framework that can identify a person from a small group of people in a device-free manner using WiFi. We show that Channel State Information (CSI) used in recent WiFi can identify a person's steps and walking gait. The walking gait being distinguishing characteristics for different people, WiWho uses CSI-based gait for person identification. We demonstrate how step and walk analysis can be used to identify a person's walking gait from CSI, and how this information can be used to identify a person. WiWho does not require a person to carry any device and is effortless since it only requires the person to walk for a few steps (e.g. entering a home or an office). We evaluate WiWho using experiments at multiple locations with a total of 20 volunteers, and show that it can identify a person with average accuracy of 92 to 80 from a group of 2 to 6 people respectively. We also show that in most cases walking as few as 2-3 meters is sufficient to recognize a person's gait and identify the person. We discuss the potential and challenges of WiFi-based person identification with respect to smart space applications.",
"",
"Substantial progress in WiFi-based indoor localization has proven that pervasiveness of WiFi can be exploited beyond its traditional use of internet access to enable a variety of sensing applications. Understanding shopper's behavior through physical analytics can provide crucial insights to the business owner in terms of effectiveness of promotions, arrangement of products and efficiency of services. However, analyzing shopper's behavior and browsing patterns is challenging. Since video surveillance can not used due to high cost and privacy concerns, it is necessary to design novel techniques that can provide accurate and efficient view of shopper's behavior. In this work, we propose WiFi-based sensing of shopper's behavior in a retail store. Specifically, we show that various states of a shopper such as standing near the entrance to view a promotion or walking quickly to proceed towards the intended item can be accurately classified by profiling Channel State Information (CSI) of WiFi. We recognize a few representative states of shopper's behavior at the entrance and inside the store, and show how CSI-based profile can be used to detect that a shopper is in one of the states with very high accuracy (≈ 90 ). We discuss the potential and limitations of CSI-based sensing of shopper's behavior and physical analytics in general.",
"We present WiGest: a system that leverages changes in WiFi signal strength to sense in-air hand gestures around the user's mobile device. Compared to related work, WiGest is unique in using standard WiFi equipment, with no modifications, and no training for gesture recognition. The system identifies different signal change primitives, from which we construct mutually independent gesture families. These families can be mapped to distinguishable application actions. We address various challenges including cleaning the noisy signals, gesture type and attributes detection, reducing false positives due to interfering humans, and adapting to changing signal polarity. We implement a proof-of-concept prototype using off-the-shelf laptops and extensively evaluate the system in both an office environment and a typical apartment with standard WiFi access points. Our results show that WiGest detects the basic primitives with an accuracy of 87.5 using a single AP only, including through-the-wall non-line-of-sight scenarios. This accuracy increases to 96 using three overheard APs. In addition, when evaluating the system using a multi-media player application, we achieve a classification accuracy of 96 . This accuracy is robust to the presence of other interfering humans, highlighting WiGest's ability to enable future ubiquitous hands-free gesture-based interaction with mobile devices.",
"",
"",
"",
"Recent research has demonstrated the feasibility of detecting human respiration rate non-intrusively leveraging commodity WiFi devices. However, is it always possible to sense human respiration no matter where the subject stays and faces? What affects human respiration sensing and what's the theory behind? In this paper, we first introduce the Fresnel model in free space, then verify the Fresnel model for WiFi radio propagation in indoor environment. Leveraging the Fresnel model and WiFi radio propagation properties derived, we investigate the impact of human respiration on the receiving RF signals and develop the theory to relate one's breathing depth, location and orientation to the detectability of respiration. With the developed theory, not only when and why human respiration is detectable using WiFi devices become clear, it also sheds lights on understanding the physical limit and foundation of WiFi-based sensing systems. Intensive evaluations validate the developed theory and case studies demonstrate how to apply the theory to the respiration monitoring system design.",
"In this paper, we are interested in counting the total number of people walking in an area based on only WiFi received signal strength indicator (RSSI) measurements between a pair of stationary transmitter receiver antennas. We propose a framework based on understanding two important ways that people leave their signature on the transmitted signal: blocking the line of sight (LOS) and scattering effects. By developing a simple motion model, we first mathematically characterize the impact of the crowd on blocking the LOS. We next probabilistically characterize the impact of the total number of people on the scattering effects and the resulting multipath fading component. By putting the two components together, we then develop a mathematical expression for the probability distribution of the received signal amplitude as a function of the total number of occupants, which will be the base for our estimation using Kullback-Leibler divergence. To confirm our framework, we run extensive indoor and outdoor experiments with up to and including nine people and show that the proposed framework can estimate the total number of people with a good accuracy with only a pair of WiFi cards and the corresponding RSSI measurements.",
""
]
} |
1907.06005 | 2959491548 | The ever evolving informatics technology has gradually bounded human and computer in a compact way. Understanding user behavior becomes a key enabler in many fields such as sedentary-related healthcare, human-computer interaction (HCI) and affective computing. Traditional sensor-based and vision-based user behavior analysis approaches are obtrusive in general, hindering their usage in realworld. Therefore, in this article, we first introduce WiFi signal as a new source instead of sensor and vision for unobtrusive user behaviors analysis. Then we design BeSense, a contactless behavior analysis system leveraging signal processing and computational intelligence over WiFi channel state information (CSI). We prototype BeSense on commodity low-cost WiFi devices and evaluate its performance in realworld environments. Experimental results have verified its effectiveness in recognizing user behaviors. | In the early days, WiFi-based behavior sensing mainly used RSS (Received Signal Strength). Sigg et al. @cite_7 use a software radio to transmit RF signals and determine human motion based on changes in RSS. Abdelnasser et al. leverage RSS to identify 7 different gestures @cite_3 and for respiratory detection @cite_22 . We also built a similar RSS-based system PAWS to handle whole-body activities @cite_28 . Since RSS is coarse-grained and CSI can yield more detailed information, recent research mainly uses CSI for behavior sensing. Wifall @cite_8 uses CSI to implement a fall detection system. The authors of @cite_32 leverage CSI to recognize five different customer behavior states. We also built a CSI-based system MoSense to pinpoint the motions in a real-time manner @cite_16 . | {
"cite_N": [
"@cite_22",
"@cite_7",
"@cite_8",
"@cite_28",
"@cite_32",
"@cite_3",
"@cite_16"
],
"mid": [
"2085522395",
"2031972300",
"2338892592",
"2344637340",
"2035883018",
"2129149054",
"2756489842"
],
"abstract": [
"Monitoring breathing rates and patterns helps in the diagnosis and potential avoidance of various health problems. Current solutions for respiratory monitoring, however, are usually invasive and or limited to medical facilities. In this paper, we propose a novel respiratory monitoring system, UbiBreathe, based on ubiquitous off-the-shelf WiFi-enabled devices. Our experiments show that the received signal strength (RSS) at a WiFi-enabled device held on a person's chest is affected by the breathing process. This effect extends to scenarios when the person is situated on the line-of-sight (LOS) between the access point and the device, even without holding it. UbiBreathe leverages these changes in the WiFi RSS patterns to enable ubiquitous non-invasive respiratory rate estimation, as well as apnea detection. We propose the full architecture and design for UbiBreathe, incorporating various modules that help reliably extract the hidden breathing signal from a noisy WiFi RSS. The system handles various challenges such as noise elimination, interfering humans, sudden user movements, as well as detecting abnormal breathing situations. Our implementation of UbiBreathe using off-the-shelf devices in a wide range of environmental conditions shows that it can estimate different breathing rates with less than 1 breaths per minute (bpm) error. In addition, UbiBreathe can detect apnea with more than 96 accuracy in both the device-on-chest and hands-free scenarios. This highlights its suitability for a new class of anywhere respiratory monitoring.",
"We consider the recognition of activities from passive entities by analysing radio-frequency (RF)-channel fluctuation. In particular, we focus on the recognition of activities by active Software-defined-radio (SDR)-based Device-free Activity Recognition (DFAR) systems and investigate the localisation of activities performed, the generalisation of features for alternative environments and the distinction between walking speeds. Furthermore, we conduct case studies for Received Signal Strength (RSS)-based active and continuous signal-based passive systems to exploit the accuracy decrease in these related cases. All systems are compared to an accelerometer-based recognition system.",
"Injuries that are caused by falls have been regarded as one of the major health threats to the independent living for the elderly. Conventional fall detection systems have various limitations. In this work, we first look for the correlations between different radio signal variations and activities by analyzing radio propagation model. Based on our observation, we propose WiFall, a truly unobtrusive fall detection system. WiFall employs physical layer Channel State Information (CSI) as the indicator of activities. It can detect fall of the human without hardware modification, extra environmental setup, or any wearable device. We implement WiFall on desktops equipped with commodity 802.11n NIC, and evaluate the performance in three typical indoor scenarios with several layouts of transmitter-receiver (Tx-Rx) links. In our area of interest, WiFall can achieve fall detection for a single person with high accuracy. As demonstrated by the experimental results, WiFall yields 90 percent detection precision with a false alarm rate of 15 percent on average using a one-class SVM classifier in all testing scenarios. It can also achieve average 94 percent fall detection precisions with 13 percent false alarm using Random Forest algorithm.",
"Indoor human activity recognition remains a hot topic and receives tremendous research efforts during the last few decades. However, previous solutions either rely on special hardware, or demand the cooperation of subjects. Therefore, the scalability issue remains a great challenge. To this end, we present an online activity recognition system, which explores WiFi ambient signals for received signal strength indicator (RSSI) fingerprint of different activities. It can be integrated into any existing WLAN networks without additional hardware support. Also, it does not need the subjects to be cooperative during the recognition process. More specifically, we first conduct an empirical study to gain in-depth understanding of WiFi characteristics, e.g., the impact of activities on the WiFi RSSI. Then, we present an online activity recognition architecture that is flexible and can adapt to different settings conditions scenarios. Lastly, a prototype system is built and evaluated via extensive real-world experiments. A novel fusion algorithm is specifically designed based on the classification tree to better classify activities with similar signatures. Experimental results show that the fusion algorithm outperforms three other well-known classifiers [i.e., NaiveBayes, Bagging, and k-nearest neighbor (k-NN)] in terms of accuracy and complexity. Important sights and hands-on experiences have been obtained to guide the system implementation and outline future research directions.",
"Substantial progress in WiFi-based indoor localization has proven that pervasiveness of WiFi can be exploited beyond its traditional use of internet access to enable a variety of sensing applications. Understanding shopper's behavior through physical analytics can provide crucial insights to the business owner in terms of effectiveness of promotions, arrangement of products and efficiency of services. However, analyzing shopper's behavior and browsing patterns is challenging. Since video surveillance can not used due to high cost and privacy concerns, it is necessary to design novel techniques that can provide accurate and efficient view of shopper's behavior. In this work, we propose WiFi-based sensing of shopper's behavior in a retail store. Specifically, we show that various states of a shopper such as standing near the entrance to view a promotion or walking quickly to proceed towards the intended item can be accurately classified by profiling Channel State Information (CSI) of WiFi. We recognize a few representative states of shopper's behavior at the entrance and inside the store, and show how CSI-based profile can be used to detect that a shopper is in one of the states with very high accuracy (≈ 90 ). We discuss the potential and limitations of CSI-based sensing of shopper's behavior and physical analytics in general.",
"We present WiGest: a system that leverages changes in WiFi signal strength to sense in-air hand gestures around the user's mobile device. Compared to related work, WiGest is unique in using standard WiFi equipment, with no modifications, and no training for gesture recognition. The system identifies different signal change primitives, from which we construct mutually independent gesture families. These families can be mapped to distinguishable application actions. We address various challenges including cleaning the noisy signals, gesture type and attributes detection, reducing false positives due to interfering humans, and adapting to changing signal polarity. We implement a proof-of-concept prototype using off-the-shelf laptops and extensively evaluate the system in both an office environment and a typical apartment with standard WiFi access points. Our results show that WiGest detects the basic primitives with an accuracy of 87.5 using a single AP only, including through-the-wall non-line-of-sight scenarios. This accuracy increases to 96 using three overheard APs. In addition, when evaluating the system using a multi-media player application, we achieve a classification accuracy of 96 . This accuracy is robust to the presence of other interfering humans, highlighting WiGest's ability to enable future ubiquitous hands-free gesture-based interaction with mobile devices.",
"Motion is a critical indicator of human presence and activities. Recent developments in the field of indoor motion detection have revealed their potentials in enhancing our living experiences through applications like intrusion detection and sleep monitoring. However, existing solutions still face several critical downsides such as the availability (specialized hardware), reliability (illumination and line-of-sight constraints), and privacy issues (being watched). To overcome such shortages, a radio frequency (RF) based device-free motion detection system (MoSense) is designed via leveraging the attenuation of ubiquitous WiFi signals induced by motions to deliver a reliable and transparent detection service in realtime. The design and implementation of MoSense face two challenges: 1) characterizing stationary states and 2) the noisy subcarriers. For the first challenge, a silence analysis model is proposed to characterize stationary states for distinguishing motions. For the second challenge, we design a distance-based mechanism to select certain subcarriers that better capture the impact of motions from the noisy channel through measuring the similarity between subcarriers. A prototype of MoSense is realized and evaluated in real environments. By comparing MoSense with other two state-of-the-art systems, i.e., FIMD and FRID, we have shown that MoSense is superior in terms of precision, false negative rate and computational complexity. Considering that MoSense is compatible with existing WiFi infrastructure, it constitutes a low-cost yet promising solution for motion detection."
]
} |
1907.06005 | 2959491548 | The ever-evolving informatics technology has gradually bound humans and computers tightly together. Understanding user behavior becomes a key enabler in many fields such as sedentary-related healthcare, human-computer interaction (HCI) and affective computing. Traditional sensor-based and vision-based user behavior analysis approaches are obtrusive in general, hindering their usage in the real world. Therefore, in this article, we first introduce the WiFi signal as a new source, instead of sensors and vision, for unobtrusive user behavior analysis. Then we design BeSense, a contactless behavior analysis system leveraging signal processing and computational intelligence over WiFi channel state information (CSI). We prototype BeSense on commodity low-cost WiFi devices and evaluate its performance in real-world environments. Experimental results have verified its effectiveness in recognizing user behaviors. | These schemes basically rely on high-level features like the location, velocity and direction of a motion for behavior computing. In particular, WiSee @cite_9 and WiDance @cite_26 use the Doppler shift to extract the direction of motion and then use this direction information to classify different motions. WiDir @cite_12 uses Fresnel zone theory to extract direction and distance information to identify the direction of walking. CARM @cite_2 uses time-frequency analysis and DWT (Discrete Wavelet Transformation) to extract velocity information and an HMM (Hidden Markov Model) to achieve behavior recognition. Using high-level features for behavior recognition is more reasonable than using statistical features.
"cite_N": [
"@cite_9",
"@cite_26",
"@cite_12",
"@cite_2"
],
"mid": [
"2091196730",
"2611241975",
"2507200569",
"2095396347"
],
"abstract": [
"This demo presents WiSee, a novel human-computer interaction system that leverages wireless networks (e.g., Wi-Fi), to enable sensing and recognition of human gestures and motion. Since wire- less signals do not require line-of-sight and can traverse through walls, WiSee enables novel human-computer interfaces for remote device control and building automation. Further, it achieves this goal without requiring instrumentation of the human body with sensing devices. We integrate WiSee with applications and demonstrate how WiSee enables users to use gestures and control applications including music players and gaming systems. Specifically, our demo will allow SIGCOMM attendees to control a music player and a lighting control device using gestures.",
"In-air interaction acts as a key enabler for ambient intelligence and augmented reality. As an increasing popular example, exergames, and the alike gesture recognition applications, have attracted extensive research in designing accurate, pervasive and low-cost user interfaces. Recent advances in wireless sensing show promise for a ubiquitous gesture-based interaction interface with Wi-Fi. In this work, we extract complete information of motion-induced Doppler shifts with only commodity Wi-Fi. The key insight is to harness antenna diversity to carefully eliminate random phase shifts while retaining relevant Doppler shifts. We further correlate Doppler shifts with motion directions, and propose a light-weight pipeline to detect, segment, and recognize motions without training. On this basis, we present WiDance, a Wi-Fi-based user interface, which we utilize to design and prototype a contactless dance-pad exergame. Experimental results in typical indoor environment demonstrate a superior performance with an accuracy of 92 , remarkably outperforming prior approaches.",
"Despite its importance, walking direction is still a key context lacking a cost-effective and continuous solution that people can access in indoor environments. Recently, device-free sensing has attracted great attention because these techniques do not require the user to carry any device and hence could enable many applications in smart homes and offices. In this paper, we present WiDir, the first system that leverages WiFi wireless signals to estimate a human's walking direction, in a device-free manner. Human motion changes the multipath distribution and thus WiFi Channel State Information at the receiver end. WiDir analyzes the phase change dynamics from multiple WiFi subcarriers based on Fresnel zone model and infers the walking direction. We implement a proof-of-concept prototype using commercial WiFi devices and evaluate it in both home and office environments. Experimental results show that WiDir can estimate human walking direction with a median error of less than 10 degrees.",
"Some pioneer WiFi signal based human activity recognition systems have been proposed. Their key limitation lies in the lack of a model that can quantitatively correlate CSI dynamics and human activities. In this paper, we propose CARM, a CSI based human Activity Recognition and Monitoring system. CARM has two theoretical underpinnings: a CSI-speed model, which quantifies the correlation between CSI value dynamics and human movement speeds, and a CSI-activity model, which quantifies the correlation between the movement speeds of different human body parts and a specific human activity. By these two models, we quantitatively build the correlation between CSI value dynamics and a specific human activity. CARM uses this correlation as the profiling mechanism and recognizes a given activity by matching it to the best-fit profile. We implemented CARM using commercial WiFi devices and evaluated it in several different environments. Our results show that CARM achieves an average accuracy of greater than 96 ."
]
} |
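The CARM-style pipeline described in the related work above — extracting velocity-related features with a discrete wavelet transform and recognizing activities with per-activity HMMs — can be sketched roughly as follows in Python. The wavelet choice, feature definition, number of HMM states and the helper names are illustrative assumptions, not the cited authors' actual settings.

import numpy as np
import pywt
from hmmlearn.hmm import GaussianHMM

def dwt_features(csi_amplitude, wavelet="db4", level=4):
    # csi_amplitude: 1-D CSI amplitude series for one subcarrier; the log-energy of each
    # DWT sub-band serves as a crude, speed-related feature vector.
    coeffs = pywt.wavedec(csi_amplitude, wavelet, level=level)
    return np.array([np.log(np.sum(c ** 2) + 1e-9) for c in coeffs])

def train_activity_hmms(sequences_by_label, n_states=4):
    # sequences_by_label: {activity: list of (T, D) feature sequences}; one HMM per activity.
    models = {}
    for label, seqs in sequences_by_label.items():
        X, lengths = np.vstack(seqs), [len(s) for s in seqs]
        m = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[label] = m
    return models

def classify(models, seq):
    # Assign the activity whose HMM gives the highest log-likelihood to the sequence.
    return max(models, key=lambda label: models[label].score(seq))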
1907.06005 | 2959491548 | The ever-evolving informatics technology has gradually bound humans and computers tightly together. Understanding user behavior becomes a key enabler in many fields such as sedentary-related healthcare, human-computer interaction (HCI) and affective computing. Traditional sensor-based and vision-based user behavior analysis approaches are obtrusive in general, hindering their usage in the real world. Therefore, in this article, we first introduce the WiFi signal as a new source, instead of sensors and vision, for unobtrusive user behavior analysis. Then we design BeSense, a contactless behavior analysis system leveraging signal processing and computational intelligence over WiFi channel state information (CSI). We prototype BeSense on commodity low-cost WiFi devices and evaluate its performance in real-world environments. Experimental results have verified its effectiveness in recognizing user behaviors. | Since human behavior is complex and fuzzy in nature, there has recently been a trend to explore computational intelligence, which is a specialized paradigm to deal with this kind of problem @cite_23 @cite_31 @cite_6 . These methods extract features or directly input waveform data into a deep network model and train the network for motion recognition. For example, Li et al. adopt a multi-layer convolutional neural network (CNN) for learning human activities using WiFi CSI from multiple access points (APs) @cite_23 . This deep learning-based recognition achieves higher accuracy than traditional solutions but requires substantial training data and training time. These methods are also unable to deal with micro-gestures, since the corresponding changes in WiFi CSI are insignificant or even negligible at times.
"cite_N": [
"@cite_31",
"@cite_6",
"@cite_23"
],
"mid": [
"2765756443",
"",
"2803428049"
],
"abstract": [
"Identifying line-of-sight (LOS) and non-LOS channel conditions can improve the performance of many wireless applications, such as signal strength-based localization algorithms. For this purpose, channel state information (CSI) obtained by commodity IEEE 802.11n devices can be used, because it contains information about channel impulse response (CIR). However, because of the limited sampling rate of the devices, a high-resolution CIR is not available, and it is difficult to detect the existence of an LOS path from a single CSI measurement, but it can be inferred from the variation pattern of CSI over time. To this end, we propose a recurrent neural network (RNN) model, which takes a series of CSI to identify the corresponding channel condition. We collect numerous measurement data under an indoor office environment, train the proposed RNN model, and compare the performance with those of existing schemes that use handcrafted features. The proposed method efficiently learns a nonlinear relationship between input and output, and thus, yields high accuracy even for data obtained in a very short period.",
"",
"Wi-Fi channel state information (CSI) provides adequate information for recognizing and analyzing human activities. Because of the short distance and low transmit power of Wi-Fi communications, people usually deploy multiple access points (APs) in a small area. Traditional Wi-Fi CSI-based human activity recognition methods adopt Wi-Fi CSI from a single AP, which is not very appropriate for a high-density Wi-Fi environment. In this article, we propose a learning method that analyzes the CSI of multiple APs in a small area to detect and recognize human activities. We introduce a deep learning model to process complex and large CSI from multiple APs. From extensive experiment results, our method performs better than other solutions in a given environment where multiple Wi-Fi APs exist."
]
} |
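A minimal PyTorch sketch of the kind of multi-AP CNN classifier described above might look like the following; the tensor layout (one channel per access point), layer sizes and class count are assumptions made purely for illustration.

import torch
import torch.nn as nn

class CsiCnn(nn.Module):
    # Treats CSI as an image of shape (n_aps, n_subcarriers, n_frames), one channel per AP.
    def __init__(self, n_aps=3, n_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_aps, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

logits = CsiCnn()(torch.randn(8, 3, 30, 200))  # batch of 8, 30 subcarriers, 200 frames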
1907.06005 | 2959491548 | The ever-evolving informatics technology has gradually bound humans and computers tightly together. Understanding user behavior becomes a key enabler in many fields such as sedentary-related healthcare, human-computer interaction (HCI) and affective computing. Traditional sensor-based and vision-based user behavior analysis approaches are obtrusive in general, hindering their usage in the real world. Therefore, in this article, we first introduce the WiFi signal as a new source, instead of sensors and vision, for unobtrusive user behavior analysis. Then we design BeSense, a contactless behavior analysis system leveraging signal processing and computational intelligence over WiFi channel state information (CSI). We prototype BeSense on commodity low-cost WiFi devices and evaluate its performance in real-world environments. Experimental results have verified its effectiveness in recognizing user behaviors. | To this end, we present BeSense, which leverages signal processing to handle micro-gestures and computational intelligence for behavior analysis. It extends our previous work @cite_14 with non-trivial improvements. In particular, we have improved the signal segmentation algorithm, which results in better performance than the previous method based on variance thresholds. In contrast to the previous gesture-frequency-based behavior recognition method, we adopt an HMM-based recognition method for behavior recognition, which provides better performance. We also conduct a series of experiments to find alternatives with similar influences on the channel response to simulate human movements.
"cite_N": [
"@cite_14"
],
"mid": [
"2916709516"
],
"abstract": [
"In this paper, we present WoSense, a device-free and real-time behavior analysis system leveraging only WiFi infrastructures. WoSense aims to remotely recognize various human behaviors like surfing, gaming and working around computers, which are considered to be an essential part of our daily lives both at work and at home. The key of WoSense is to exploit the signal distortions on channel data caused by gestures like finger and hand movements, and then identify possible behaviors via the composite of gestures. Therefore, two critical challenges need to be tackled: how to enhance such insignificant distortions led by micro gestures, how to segment the continuous signals according to different gestures in a real-time manner? For the former, instead of relying on empirical studies like our rivals, WoSense offers a Fresnel zone based model with theoretic understandings between the gestures and signal distortions. For the latter, WoSense employs a light-weight automatic segmentation algorithm exploring the variance feature of channel data. We prototype WoSense on the commodity low-cost WiFi devices and evaluate its performance in extensive real- world experiments. WoSense achieves an average 96.77 accuracy for distinguishing the typing and mousing gestures, and 92.5 accuracy for recognizing four different behaviors, i.e., stationary, surfing, gaming and working."
]
} |
1907.06071 | 2971318102 | Dense depth completion is essential for autonomous systems and 3D reconstruction. In this paper, a lightweight yet efficient network (S&CNet) is proposed to obtain a good trade-off between efficiency and accuracy for dense depth completion. A dual-stream attention module (S&C enhancer) is introduced to measure both the spatial-wise and channel-wise global-range relationships of extracted features so as to improve the performance. A coarse-to-fine network is designed and the proposed S&C enhancer is plugged into the coarse estimation network between its encoder and decoder networks. Experimental results demonstrate that our approach achieves performance competitive with existing works on the KITTI dataset while running almost four times faster. The proposed S&C enhancer can be plugged into other existing works and boost their performance significantly at a negligible additional computational cost. | There has been significant effort in prior work to design efficient networks. For the image classification task, MobileNet @cite_0 achieves top-5 accuracy comparable to VGG-16 @cite_8 on the ImageNet benchmark with @math times fewer weights. ShuffleNet @cite_6 further boosts the performance by adopting a channel shuffle mechanism. Moreover, with the help of an inverted residual structure, MobileNetV2 @cite_18 outperforms MobileNet and ShuffleNet with even fewer parameters. Recently, MobileNetV3 @cite_14 further improved the performance of efficient networks by introducing the squeeze-and-excitation module. For object detection, YOLO-V3 @cite_32 runs @math times faster than Faster-RCNN @cite_22 and achieves higher mean average precision. For the semantic segmentation task, ERFNet @cite_11 employs residual factorized convolution modules and outperforms FCN @cite_27 while running @math times faster.
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_22",
"@cite_8",
"@cite_32",
"@cite_6",
"@cite_0",
"@cite_27",
"@cite_11"
],
"mid": [
"2963163009",
"2944779197",
"639708223",
"2962835968",
"2796347433",
"2091600398",
"2612445135",
"2952632681",
"2762439315"
],
"abstract": [
"In this paper we describe a new mobile architecture, MobileNetV2, that improves the state of the art performance of mobile models on multiple tasks and benchmarks as well as across a spectrum of different model sizes. We also describe efficient ways of applying these mobile models to object detection in a novel framework we call SSDLite. Additionally, we demonstrate how to build mobile semantic segmentation models through a reduced form of DeepLabv3 which we call Mobile DeepLabv3. is based on an inverted residual structure where the shortcut connections are between the thin bottleneck layers. The intermediate expansion layer uses lightweight depthwise convolutions to filter features as a source of non-linearity. Additionally, we find that it is important to remove non-linearities in the narrow layers in order to maintain representational power. We demonstrate that this improves performance and provide an intuition that led to this design. Finally, our approach allows decoupling of the input output domains from the expressiveness of the transformation, which provides a convenient framework for further analysis. We measure our performance on ImageNet [1] classification, COCO object detection [2], VOC image segmentation [3]. We evaluate the trade-offs between accuracy, and number of operations measured by multiply-adds (MAdd), as well as actual latency, and the number of parameters.",
"We present the next generation of MobileNets based on a combination of complementary search techniques as well as a novel architecture design. MobileNetV3 is tuned to mobile phone CPUs through a combination of hardware-aware network architecture search (NAS) complemented by the NetAdapt algorithm and then subsequently improved through novel architecture advances. This paper starts the exploration of how automated search algorithms and network design can work together to harness complementary approaches improving the overall state of the art. Through this process we create two new MobileNet models for release: MobileNetV3-Large and MobileNetV3-Small which are targeted for high and low resource use cases. These models are then adapted and applied to the tasks of object detection and semantic segmentation. For the task of semantic segmentation (or any dense pixel prediction), we propose a new efficient segmentation decoder Lite Reduced Atrous Spatial Pyramid Pooling (LR-ASPP). We achieve new state of the art results for mobile classification, detection and segmentation. MobileNetV3-Large is 3.2 more accurate on ImageNet classification while reducing latency by 15 compared to MobileNetV2. MobileNetV3-Small is 4.6 more accurate while reducing latency by 5 compared to MobileNetV2. MobileNetV3-Large detection is 25 faster at roughly the same accuracy as MobileNetV2 on COCO detection. MobileNetV3-Large LR-ASPP is 30 faster than MobileNetV2 R-ASPP at similar accuracy for Cityscapes segmentation.",
"State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [1] and Fast R-CNN [2] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features—using the recently popular terminology of neural networks with ’attention’ mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model [3] , our detection system has a frame rate of 5 fps ( including all steps ) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.",
"Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.",
"We present some updates to YOLO! We made a bunch of little design changes to make it better. We also trained this new network that's pretty swell. It's a little bigger than last time but more accurate. It's still fast though, don't worry. At 320x320 YOLOv3 runs in 22 ms at 28.2 mAP, as accurate as SSD but three times faster. When we look at the old .5 IOU mAP detection metric YOLOv3 is quite good. It achieves 57.9 mAP@50 in 51 ms on a Titan X, compared to 57.5 mAP@50 in 198 ms by RetinaNet, similar performance but 3.8x faster. As always, all the code is online at this https URL",
"A multihop wavelength-division multiplexing (WDM) approach, referred to as Shuffle Net, for achieving concurrency in distributed lightwave networks is proposed. A Shuffle Net can be configua19 red with each user having as few as one fixed-wavelength transmitter and one fixed-wavelength receiver, avoiding both wavelength-agility and pretransmission-coordination problems. The network can achieve at least 40 of the maximum efficiency possible with wavelength-agile transmitters and receivers. To transmit a packet from one user to another may require routing the packet through intermediate users, each repeating the packet on a new wavelength until the packet is finally transmitted on a wavelength that the destination user receives. For such a multihop lightwave network, the transmit and receive wavelengths must be assigned to users to provide both a path between all users and the efficient utilization of all wavelength channels. >",
"We present a class of efficient models called MobileNets for mobile and embedded vision applications. MobileNets are based on a streamlined architecture that uses depth-wise separable convolutions to build light weight deep neural networks. We introduce two simple global hyper-parameters that efficiently trade off between latency and accuracy. These hyper-parameters allow the model builder to choose the right sized model for their application based on the constraints of the problem. We present extensive experiments on resource and accuracy tradeoffs and show strong performance compared to other popular models on ImageNet classification. We then demonstrate the effectiveness of MobileNets across a wide range of applications and use cases including object detection, finegrain classification, face attributes and large scale geo-localization.",
"Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build \"fully convolutional\" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image.",
"Semantic segmentation is a challenging task that addresses most of the perception needs of intelligent vehicles (IVs) in an unified way. Deep neural networks excel at this task, as they can be trained end-to-end to accurately classify multiple object categories in an image at pixel level. However, a good tradeoff between high quality and computational resources is yet not present in the state-of-the-art semantic segmentation approaches, limiting their application in real vehicles. In this paper, we propose a deep architecture that is able to run in real time while providing accurate semantic segmentation. The core of our architecture is a novel layer that uses residual connections and factorized convolutions in order to remain efficient while retaining remarkable accuracy. Our approach is able to run at over 83 FPS in a single Titan X, and 7 FPS in a Jetson TX1 (embedded device). A comprehensive set of experiments on the publicly available Cityscapes data set demonstrates that our system achieves an accuracy that is similar to the state of the art, while being orders of magnitude faster to compute than other architectures that achieve top precision. The resulting tradeoff makes our model an ideal approach for scene understanding in IV applications. The code is publicly available at: https: github.com Eromera erfnet"
]
} |
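The parameter savings of the MobileNet-style models discussed above come from replacing standard convolutions with depthwise separable ones. A small PyTorch sketch of that building block is given below; it is a generic illustration of the factorization, not the exact block used in any of the cited networks.

import torch.nn as nn

def depthwise_separable(in_ch, out_ch, stride=1):
    # A 3x3 depthwise convolution (groups=in_ch) followed by a 1x1 pointwise projection:
    # the factorization that replaces a dense 3x3 convolution in MobileNet-style models.
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, 3, stride=stride, padding=1, groups=in_ch, bias=False),
        nn.BatchNorm2d(in_ch), nn.ReLU(inplace=True),
        nn.Conv2d(in_ch, out_ch, 1, bias=False),
        nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )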
1901.04452 | 2909445821 | The FoundationDB Record Layer is an open source library that provides a record-oriented data store with semantics similar to a relational database implemented on top of FoundationDB, an ordered, transactional key-value store. The Record Layer provides a lightweight, highly extensible way to store structured data. It offers schema management and a rich set of query and indexing facilities, some of which are not usually found in traditional relational databases, such as nested record types, indexes on commit versions, and indexes that span multiple record types. The Record Layer is stateless and built for massive multi-tenancy, encapsulating and isolating all of a tenant's state, including indexes, into a separate logical database. We demonstrate how the Record Layer is used by CloudKit, Apple's cloud backend service, to provide powerful abstractions to applications serving hundreds of millions of users. CloudKit uses the Record Layer to host billions of independent databases, many with a common schema. Features provided by the Record Layer enable CloudKit to provide richer APIs and stronger semantics with reduced maintenance overhead and improved scalability. | There is a broad literature on query optimization, starting with the seminal work of @cite_13 on System R. Since then, much of the focus has been on efficient search-space exploration. Most notably, Cascades @cite_15 introduced a clean separation of logical and physical query plans, and proposed operators and transformation rules that are encapsulated as self-contained components. Cascades allows logically equivalent expressions to be grouped in the so called Memo structure to eliminate redundant work. Recently, Greenplum's Orca query optimizer @cite_7 was developed as a modern incarnation of Cascades' principles. We are currently in the process of developing an optimizer that uses the proven principles of Cascades, paving the way for the development of a full cost-based optimizer (Appendix ). | {
"cite_N": [
"@cite_15",
"@cite_13",
"@cite_7"
],
"mid": [
"135863099",
"2153329411",
"2051756193"
],
"abstract": [
"This paper describes a new extensible query optimization framework that resolves many of the shortcomings of the EXODUS and Volcano optimizer generators. In addition to extensibility, dynamic programming, and memorization based on and extended from the EXODUS and Volcano prototypes, this new optimizer provides (i) manipulation of operator arguments using rules or functions, (ii) operators that are both logical and physical for predicates etc., (iii) schema-specific rules for materialized views, (iv) rules to insert ”enforcers” or ”glue operators,” (v) rule-specific guidance, permitting grouping of rules, (vi) basic facilities that will later permit parallel search, partially ordered cost measures, and dynamic plans, (vii) extensive tracing support, and (viii) a clean interface and implementation making full use of the abstraction mechanisms of C++. We describe and justify our design choices for each of these issues. The optimizer system described here is operational and will serve as the foundation for new query optimizers in Tandem’s NonStop SQL product and in Microsoft’s SQL Server product.",
"In a high level query and data manipulation language such as SQL, requests are stated non-procedurally, without reference to access paths. This paper describes how System R chooses access paths for both simple (single relation) and complex queries (such as joins), given a user specification of desired data as a boolean expression of predicates. System R is an experimental database management system developed to carry out research on the relational model of data. System R was designed and built by members of the IBM San Jose Research Laboratory.",
"The performance of analytical query processing in data management systems depends primarily on the capabilities of the system's query optimizer. Increased data volumes and heightened interest in processing complex analytical queries have prompted Pivotal to build a new query optimizer. In this paper we present the architecture of Orca, the new query optimizer for all Pivotal data management products, including Pivotal Greenplum Database and Pivotal HAWQ. Orca is a comprehensive development uniting state-of-the-art query optimization technology with own original research resulting in a modular and portable optimizer architecture. In addition to describing the overall architecture, we highlight several unique features and present performance comparisons against other systems."
]
} |
1901.04268 | 2909259341 | In this paper, we propose to learn shared semantic space with correlation alignment ( @math ) for multimodal data representations, which aligns nonlinear correlations of multimodal data distributions in deep neural networks designed for heterogeneous data. In the context of cross-modal (event) retrieval, we design a neural network with convolutional layers and fully-connected layers to extract features for images, including images on Flickr-like social media. Simultaneously, we exploit a fully-connected neural network to extract semantic features for texts, including news articles from news media. In particular, nonlinear correlations of layer activations in the two neural networks are aligned with correlation alignment during the joint training of the networks. Furthermore, we project the multimodal data into a shared semantic space for cross-modal (event) retrieval, where the distances between heterogeneous data samples can be measured directly. In addition, we contribute a Wiki-Flickr Event dataset, where the multimodal data samples are not describing each other in pairs like the existing paired datasets, but all of them are describing semantic events. Extensive experiments conducted on both paired and unpaired datasets manifest the effectiveness of @math , outperforming the state-of-the-art methods. | Statistical correlation models are designed to learn a subspace where cross-modal data are aligned from the perspective of statistics. Canonical correlation analysis (CCA) @cite_6 is a representative work, which projects two sets of data to a common subspace where their correlations are maximized. Similar to CCA, cross-modal factor analysis (CFA) @cite_2 minimized the Frobenius norm between the transformed cross-modal data. Kernel-CCA @cite_6 introduced kernel functions for nonlinear correlations, which is the kernel extension of CCA. @cite_26 learned a semantic space using CCA representation and semantic category information for cross-modal retrieval tasks. @cite_1 proposed a generalized multi-view analysis (GMA) as a supervised extension of CCA. Multi-view CCA @cite_14 extended CCA by incorporating the high-level image semantic keywords as the third view. Multi-label CCA @cite_15 took the multi-label annotations to establish correspondences, without relying on the pairwise modalities like CCA. As pointed out by @cite_20 , using CCA directly may lead to coarse subspace, and the relationships between real data are too complicated to be captured by linear projections alone. | {
"cite_N": [
"@cite_14",
"@cite_26",
"@cite_1",
"@cite_6",
"@cite_2",
"@cite_15",
"@cite_20"
],
"mid": [
"2070753207",
"2106277773",
"2071207147",
"",
"2053667957",
"2210322478",
"2469619714"
],
"abstract": [
"This paper investigates the problem of modeling Internet images and associated text or tags for tasks such as image-to-image search, tag-to-image search, and image-to-tag search (image annotation). We start with canonical correlation analysis (CCA), a popular and successful approach for mapping visual and textual features to the same latent space, and incorporate a third view capturing high-level image semantics, represented either by a single category or multiple non-mutually-exclusive concepts. We present two ways to train the three-view embedding: supervised, with the third view coming from ground-truth labels or search keywords; and unsupervised, with semantic themes automatically obtained by clustering the tags. To ensure high accuracy for retrieval tasks while keeping the learning process scalable, we combine multiple strong visual features and use explicit nonlinear kernel mappings to efficiently approximate kernel CCA. To perform retrieval, we use a specially designed similarity function in the embedded space, which substantially outperforms the Euclidean distance. The resulting system produces compelling qualitative results and outperforms a number of two-view baselines on retrieval tasks on three large-scale Internet image datasets.",
"The problem of joint modeling the text and image components of multimedia documents is studied. The text component is represented as a sample from a hidden topic model, learned with latent Dirichlet allocation, and images are represented as bags of visual (SIFT) features. Two hypotheses are investigated: that 1) there is a benefit to explicitly modeling correlations between the two components, and 2) this modeling is more effective in feature spaces with higher levels of abstraction. Correlations between the two components are learned with canonical correlation analysis. Abstraction is achieved by representing text and images at a more general, semantic level. The two hypotheses are studied in the context of the task of cross-modal document retrieval. This includes retrieving the text that most closely matches a query image, or retrieving the images that most closely match a query text. It is shown that accounting for cross-modal correlations and semantic abstraction both improve retrieval accuracy. The cross-modal model is also shown to outperform state-of-the-art image retrieval systems on a unimodal retrieval task.",
"This paper presents a general multi-view feature extraction approach that we call Generalized Multiview Analysis or GMA. GMA has all the desirable properties required for cross-view classification and retrieval: it is supervised, it allows generalization to unseen classes, it is multi-view and kernelizable, it affords an efficient eigenvalue based solution and is applicable to any domain. GMA exploits the fact that most popular supervised and unsupervised feature extraction techniques are the solution of a special form of a quadratic constrained quadratic program (QCQP), which can be solved efficiently as a generalized eigenvalue problem. GMA solves a joint, relaxed QCQP over different feature spaces to obtain a single (non)linear subspace. Intuitively, GMA is a supervised extension of Canonical Correlational Analysis (CCA), which is useful for cross-view classification and retrieval. The proposed approach is general and has the potential to replace CCA whenever classification or retrieval is the purpose and label information is available. We outperform previous approaches for textimage retrieval on Pascal and Wiki text-image data. We report state-of-the-art results for pose and lighting invariant face recognition on the MultiPIE face dataset, significantly outperforming other approaches.",
"",
"Multimodal information processing has received considerable attention in recent years. The focus of existing research in this area has been predominantly on the use of fusion technology. In this paper, we suggest that cross-modal association can provide a new set of powerful solutions in this area. We investigate different cross-modal association methods using the linear correlation model. We also introduce a novel method for cross-modal association called Cross-modal Factor Analysis (CFA). Our earlier work on Latent Semantic Indexing (LSI) is extended for applications that use off-line supervised training. As a promising research direction and practical application of cross-modal association, cross-modal information retrieval where queries from one modality are used to search for content in another modality using low-level features is then discussed in detail. Different association methods are tested and compared using the proposed cross-modal retrieval system. All these methods achieve significant dimensionality reduction. Among them CFA gives the best retrieval performance. Finally, this paper addresses the use of cross-modal association to detect talking heads. The CFA method achieves 91.1 detection accuracy, while LSI and Canonical Correlation Analysis (CCA) achieve 66.1 and 73.9 accuracy, respectively. As shown by experiments, cross-modal association provides many useful benefits, such as robust noise resistance and effective feature selection. Compared to CCA and LSI, the proposed CFA shows several advantages in analysis performance and feature usage. Its capability in feature selection and noise resistance also makes CFA a promising tool for many multimedia analysis applications.",
"In this work, we address the problem of cross-modal retrieval in presence of multi-label annotations. In particular, we introduce multi-label Canonical Correlation Analysis (ml-CCA), an extension of CCA, for learning shared subspaces taking into account high level semantic information in the form of multi-label annotations. Unlike CCA, ml-CCA does not rely on explicit pairing between modalities, instead it uses the multi-label information to establish correspondences. This results in a discriminative subspace which is better suited for cross-modal retrieval tasks. We also present Fast ml-CCA, a computationally efficient version of ml-CCA, which is able to handle large scale datasets. We show the efficacy of our approach by conducting extensive cross-modal retrieval experiments on three standard benchmark datasets. The results show that the proposed approach achieves state of the art retrieval performance on the three datasets.",
"Cross-modal tasks occur naturally for multimedia content that can be described along two or more modalities like visual content and text. Such tasks require to \"translate\" information from one modality to another. Methods like kernelized canonical correlation analysis (KCCA) attempt to solve such tasks by finding aligned subspaces in the description spaces of different modalities. Since they favor correlations against modality-specific information, these methods have shown some success in both cross-modal and bi-modal tasks. However, we show that a direct use of the subspace alignment obtained by KCCA only leads to coarse translation abilities. To address this problem, we first put forward a new representation method that aggregates information provided by the projections of both modalities on their aligned subspaces. We further suggest a method relying on neighborhoods in these subspaces to complete uni-modal information. Our proposal exhibits state-of-the-art results for bi-modal classification on Pascal VOC07 and improves it by over 60 for cross-modal retrieval on FlickR 8K 30K."
]
} |
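The CCA family of methods surveyed above projects the two modalities into a common subspace where their correlation is maximized. A tiny sketch of that baseline with scikit-learn is shown below; the feature dimensions and sample count are placeholders, and retrieval would follow as nearest-neighbour search in the shared space.

import numpy as np
from sklearn.cross_decomposition import CCA

# Hypothetical pre-extracted, row-aligned features: 500 image/text pairs.
X_img = np.random.randn(500, 512)
X_txt = np.random.randn(500, 300)

cca = CCA(n_components=10)
cca.fit(X_img, X_txt)
Z_img, Z_txt = cca.transform(X_img, X_txt)  # projections into the shared subspace

# Cross-modal retrieval then reduces to ranking Z_txt rows by similarity to a query Z_img row.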
1901.04268 | 2909259341 | In this paper, we propose to learn shared semantic space with correlation alignment ( @math ) for multimodal data representations, which aligns nonlinear correlations of multimodal data distributions in deep neural networks designed for heterogeneous data. In the context of cross-modal (event) retrieval, we design a neural network with convolutional layers and fully-connected layers to extract features for images, including images on Flickr-like social media. Simultaneously, we exploit a fully-connected neural network to extract semantic features for texts, including news articles from news media. In particular, nonlinear correlations of layer activations in the two neural networks are aligned with correlation alignment during the joint training of the networks. Furthermore, we project the multimodal data into a shared semantic space for cross-modal (event) retrieval, where the distances between heterogeneous data samples can be measured directly. In addition, we contribute a Wiki-Flickr Event dataset, where the multimodal data samples are not describing each other in pairs like the existing paired datasets, but all of them are describing semantic events. Extensive experiments conducted on both paired and unpaired datasets manifest the effectiveness of @math , outperforming the state-of-the-art methods. | The intuition of ranking models is that relevant pairs in the retrieved results should rank higher than the irrelevant pairs @cite_12 . Recently, neural networks combined with ranking loss become popular in the context of cross-modal retrieval. @cite_10 @cite_22 designed a two-branch neural network with multiple layers of linear projections followed by nonlinearities, and learned the joint embeddings for images and texts with a large margin objective which combines ranking constraints and neighborhood structure preserving constraints. @cite_33 introduced a large-scale recipe dataset, and jointly learned the embeddings of images and recipes in a common space by maximizing the cosine similarity between positive recipe-image pairs and minimizing the similarity between negative pairs. @cite_13 designed a sampling strategy and define discriminative ranking loss on two heterogeneous networks to obtain discriminative embeddings for cross-modal retrieval. However, the training of ranking loss relies on high-quality and unambiguous data pairs, which may not be appropriate for unpaired datasets, like our Wiki-Flickr Event dataset. | {
"cite_N": [
"@cite_33",
"@cite_22",
"@cite_10",
"@cite_13",
"@cite_12"
],
"mid": [
"2737041163",
"",
"2963389687",
"2765977864",
"2025423507"
],
"abstract": [
"In this paper, we introduce Recipe1M, a new large-scale, structured corpus of over 1m cooking recipes and 800k food images. As the largest publicly available collection of recipe data, Recipe1M affords the ability to train high-capacity models on aligned, multi-modal data. Using these data, we train a neural network to find a joint embedding of recipes and images that yields impressive results on an image-recipe retrieval task. Additionally, we demonstrate that regularization via the addition of a high-level classification objective both improves retrieval performance to rival that of humans and enables semantic vector arithmetic. We postulate that these embeddings will provide a basis for further exploration of the Recipe1M dataset and food and cooking in general. Code, data and models are publicly available",
"",
"This paper proposes a method for learning joint embeddings of images and text using a two-branch neural network with multiple layers of linear projections followed by nonlinearities. The network is trained using a largemargin objective that combines cross-view ranking constraints with within-view neighborhood structure preservation constraints inspired by metric learning literature. Extensive experiments show that our approach gains significant improvements in accuracy for image-to-text and textto-image retrieval. Our method achieves new state-of-theart results on the Flickr30K and MSCOCO image-sentence datasets and shows promise on the new task of phrase localization on the Flickr30K Entities dataset.",
"This paper proposes a novel deep framework of multi-networks joint learning for large-scale cross-modal retrieval. For most existing cross-modal methods, the processes of training and testing don't care about the problem of memory requirement. Hence, they are generally implemented on small-scale data. Moreover, they take feature learning and latent space embedding as two separate steps which cannot generate specific features to accord with the cross-modal task. To alleviate the problems, we first disintegrate the multiplication and inverse of some big matrices, usually involved in existing methods, into that of many sub-matrices. Each sub-matrix is targeted to dispose one pair of image-sentence, for which we further design a novel sampling strategy to select the most representative samples to construct the cross-modal ranking loss and within-modal discriminant loss functions. By this way, the proposed model consumes less memory each time such that it can scale to large-scale data. Furthermore, we apply the proposed discriminative ranking loss to effectively unify two heterogenous networks, deep residual network for images and long short-term memory for sentences, into an end-to-end deep learning architecture. Finally, we can simultaneously achieve specific features adapting to cross-modal task and learn a shared latent space for images and sentences. Extensive evaluations on two large-scale cross-modal datasets show that the proposed method brings substantial improvements over other state-of-the-art ranking methods.",
"In this article we present Supervised Semantic Indexing which defines a class of nonlinear (quadratic) models that are discriminatively trained to directly map from the word content in a query-document or document-document pair to a ranking score. Like Latent Semantic Indexing (LSI), our models take account of correlations between words (synonymy, polysemy). However, unlike LSI our models are trained from a supervised signal directly on the ranking task of interest, which we argue is the reason for our superior results. As the query and target texts are modeled separately, our approach is easily generalized to different retrieval tasks, such as cross-language retrieval or online advertising placement. Dealing with models on all pairs of words features is computationally challenging. We propose several improvements to our basic model for addressing this issue, including low rank (but diagonal preserving) representations, correlated feature hashing and sparsification. We provide an empirical study of all these methods on retrieval tasks based on Wikipedia documents as well as an Internet advertisement task. We obtain state-of-the-art performance while providing realistically scalable methods."
]
} |
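The margin-based ranking objectives used by the two-branch embedding methods above can be written compactly in PyTorch as below. This is a generic bidirectional hinge loss over in-batch negatives; the margin value and the assumption of L2-normalized embeddings are illustrative choices rather than any specific paper's settings.

import torch

def bidirectional_ranking_loss(img_emb, txt_emb, margin=0.2):
    # img_emb, txt_emb: (n, d) L2-normalized embeddings; row i of each modality is a positive
    # pair, every other row in the batch acts as a negative.
    scores = img_emb @ txt_emb.t()                       # cosine similarities
    pos = scores.diag().unsqueeze(1)                     # (n, 1) similarities of true pairs
    cost_txt = (margin + scores - pos).clamp(min=0)      # image query vs. wrong texts
    cost_img = (margin + scores - pos.t()).clamp(min=0)  # text query vs. wrong images
    mask = torch.eye(scores.size(0), dtype=torch.bool, device=scores.device)
    return cost_txt.masked_fill(mask, 0).mean() + cost_img.masked_fill(mask, 0).mean()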
1901.04379 | 2908724672 | Deep neural acoustic models benefit from context-dependent (CD) modeling of output symbols. We consider direct training of CTC networks with CD outputs, and identify two issues. The first one is frame-level normalization of probabilities in CTC, which induces strong language modeling behavior that leads to overfitting and interference with external language models. The second one is poor generalization in the presence of numerous lexical units like triphones or tri-chars. We mitigate the former with utterance-level normalization of probabilities. The latter typically requires reducing the CD symbol inventory with state-tying decision trees, which have to be transferred from classical GMM-HMM systems. We replace the trees with a CD symbol embedding network, which saves parameters and ensures generalization to unseen and undersampled CD symbols. The embedding network is trained together with the rest of the acoustic model and removes one of the last cases in which neural systems have to be bootstrapped from GMM-HMM ones. | Several authors have proposed to use multi-character or multi-phoneme output tokens to reduce the number of emissions. The DeepSpeech 2 model used non-overlapping character bigrams @cite_16 , while @cite_15 and @cite_10 dynamically chose a decomposition of the output sequence. This idea was also explored in @cite_11 in the context of sequence-to-sequence models. We do not aim to reduce the length of target sequences, but to express the dependency of symbol emissions on their context.
"cite_N": [
"@cite_15",
"@cite_16",
"@cite_10",
"@cite_11"
],
"mid": [
"2594856242",
"2949640717",
"2748163763",
"2952288254"
],
"abstract": [
"Most existing sequence labelling models rely on a fixed decomposition of a target sequence into a sequence of basic units. These methods suffer from two major drawbacks: 1) the set of basic units is fixed, such as the set of words, characters or phonemes in speech recognition, and 2) the decomposition of target sequences is fixed. These drawbacks usually result in sub-optimal performance of modeling sequences. In this pa- per, we extend the popular CTC loss criterion to alleviate these limitations, and propose a new loss function called Gram-CTC. While preserving the advantages of CTC, Gram-CTC automatically learns the best set of basic units (grams), as well as the most suitable decomposition of tar- get sequences. Unlike CTC, Gram-CTC allows the model to output variable number of characters at each time step, which enables the model to capture longer term dependency and improves the computational efficiency. We demonstrate that the proposed Gram-CTC improves CTC in terms of both performance and efficiency on the large vocabulary speech recognition task at multiple scales of data, and that with Gram-CTC we can outperform the state-of-the-art on a standard speech benchmark.",
"We show that an end-to-end deep learning approach can be used to recognize either English or Mandarin Chinese speech--two vastly different languages. Because it replaces entire pipelines of hand-engineered components with neural networks, end-to-end learning allows us to handle a diverse variety of speech including noisy environments, accents and different languages. Key to our approach is our application of HPC techniques, resulting in a 7x speedup over our previous system. Because of this efficiency, experiments that previously took weeks now run in days. This enables us to iterate more quickly to identify superior architectures and algorithms. As a result, in several cases, our system is competitive with the transcription of human workers when benchmarked on standard datasets. Finally, using a technique called Batch Dispatch with GPUs in the data center, we show that our system can be inexpensively deployed in an online setting, delivering low latency when serving users at scale.",
"",
"We present the Latent Sequence Decompositions (LSD) framework. LSD decomposes sequences with variable lengthed output units as a function of both the input sequence and the output sequence. We present a training algorithm which samples valid extensions and an approximate decoding algorithm. We experiment with the Wall Street Journal speech recognition task. Our LSD model achieves 12.9 WER compared to a character baseline of 14.8 WER. When combined with a convolutional network on the encoder, we achieve 9.6 WER."
]
} |
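For reference, the CTC criterion discussed above is applied to per-frame normalized output distributions; a minimal PyTorch usage sketch is shown below, with toy shapes and a vocabulary size chosen purely for illustration.

import torch
import torch.nn as nn

T, N, C = 50, 4, 41                                         # frames, batch, labels incl. blank 0
log_probs = torch.randn(T, N, C).log_softmax(dim=-1)        # frame-level normalized outputs
targets = torch.randint(1, C, (N, 12), dtype=torch.long)    # label sequences without blanks
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 12, dtype=torch.long)

loss = nn.CTCLoss(blank=0, zero_infinity=True)(log_probs, targets, input_lengths, target_lengths)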
1901.04379 | 2908724672 | Deep neural acoustic models benefit from context-dependent (CD) modeling of output symbols. We consider direct training of CTC networks with CD outputs, and identify two issues. The first one is frame-level normalization of probabilities in CTC, which induces strong language modeling behavior that leads to overfitting and interference with external language models. The second one is poor generalization in the presence of numerous lexical units like triphones or tri-chars. We mitigate the former with utterance-level normalization of probabilities. The latter typically requires reducing the CD symbol inventory with state-tying decision trees, which have to be transferred from classical GMM-HMM systems. We replace the trees with a CD symbol embedding network, which saves parameters and ensures generalization to unseen and undersampled CD symbols. The embedding network is trained together with the rest of the acoustic model and removes one of the last cases in which neural systems have to be bootstrapped from GMM-HMM ones. | Perhaps most similar to our work is @cite_0 in which a neural network is trained to replace a state-tying decision tree. However, this implies a multistage training procedure in which both classical GMM-HMM and neural systems are built. In contrast, we strive to keep a simple, one-stage training procedure that can be started from scratch.
"cite_N": [
"@cite_0"
],
"mid": [
"2766263127"
],
"abstract": [
"In this paper, we present a novel Deep Triphone Embedding (DTE) representation derived from Deep Neural Network (DNN) to encapsulate the discriminative information present in the adjoining speech frames. DTEs are generated using a four hidden layer DNN with 3000 nodes in each hidden layer at the first-stage. This DNN is trained with the tied-triphone classification accuracy as an optimization criterion. Thereafter, we retain the activation vectors (3000) of the last hidden layer, for each speech MFCC frame, and perform dimension reduction to further obtain a 300 dimensional representation, which we termed as DTE. DTEs along with MFCC features are fed into a second-stage four hidden layer DNN, which is subsequently trained for the task of tied-triphone classification. Both DNNs are trained using tri-phone labels generated from a tied-state triphone HMM-GMM system, by performing a forced-alignment between the transcriptions and MFCC feature frames. We conduct the experiments on publicly available TED-LIUM speech corpus. The results show that the proposed DTE method provides an improvement of absolute 2.11 in phoneme recognition, when compared with a competitive hybrid tied-state triphone HMM-DNN system."
]
} |
1901.04210 | 2950381856 | Visual SLAM has shown significant progress in recent years due to high attention from the vision community, but challenges still remain for low-textured environments. Feature-based visual SLAMs do not produce reliable camera and structure estimates due to insufficient features in a low-textured environment. Moreover, existing visual SLAMs produce partial reconstructions when the number of 3D-2D correspondences is insufficient for incremental camera estimation using bundle adjustment. This paper presents Edge SLAM, a feature-based monocular visual SLAM which mitigates the above-mentioned problems. Our proposed Edge SLAM pipeline detects edge points from images and tracks them using optical flow for point correspondence. We further refine these point correspondences using the geometrical relationship among three views. Owing to our edge-point tracking, we use a robust method for two-view initialization for bundle adjustment. Our proposed SLAM also identifies the potential situations where estimating a new camera into the existing reconstruction becomes unreliable, and we adopt a novel method to estimate the new camera reliably using a local optimization technique. We present an extensive evaluation of our proposed SLAM pipeline on the most popular open datasets and compare it with the state of the art. Experimental results indicate that our Edge SLAM is robust and works reliably well for both textured and less-textured environments in comparison to existing state-of-the-art SLAMs. | Direct SLAM: Direct methods @cite_4 have gained popularity for their semi-dense map creation. Recently, Engel et al. present LSD SLAM @cite_7 , a direct SLAM pipeline that maintains a semi-dense map and minimizes the photometric error of pixels between images. LSD SLAM @cite_7 initializes the depth of pixels with a random value of high uncertainty using the inverse depth parametrization @cite_13 and optimizes the depth based on disparities computed on image pixels. This optimization often does not converge to the true depth because of the noisy initialization and the noise present in the computation of photometric errors. Therefore, direct methods yield erroneous camera estimates (see table. ).
"cite_N": [
"@cite_13",
"@cite_4",
"@cite_7"
],
"mid": [
"2118428504",
"2108134361",
"1992763080"
],
"abstract": [
"We present a new parametrization for point features within monocular simultaneous localization and mapping (SLAM) that permits efficient and accurate representation of uncertainty during undelayed initialization and beyond, all within the standard extended Kalman filter (EKF). The key concept is direct parametrization of the inverse depth of features relative to the camera locations from which they were first viewed, which produces measurement equations with a high degree of linearity. Importantly, our parametrization can cope with features over a huge range of depths, even those that are so far from the camera that they present little parallax during motion---maintaining sufficient representative uncertainty that these points retain the opportunity to \"come in'' smoothly from infinity if the camera makes larger movements. Feature initialization is undelayed in the sense that even distant features are immediately used to improve camera motion estimates, acting initially as bearing references but not permanently labeled as such. The inverse depth parametrization remains well behaved for features at all stages of SLAM processing, but has the drawback in computational terms that each point is represented by a 6-D state vector as opposed to the standard three of a Euclidean XYZ representation. We show that once the depth estimate of a feature is sufficiently accurate, its representation can safely be converted to the Euclidean XYZ form, and propose a linearity index that allows automatic detection and conversion to maintain maximum efficiency---only low parallax features need be maintained in inverse depth form for long periods. We present a real-time implementation at 30 Hz, where the parametrization is validated in a fully automatic 3-D SLAM system featuring a handheld single camera with no additional sensing. Experiments show robust operation in challenging indoor and outdoor environments with a very large ranges of scene depth, varied motion, and also real time 360deg loop closing.",
"DTAM is a system for real-time camera tracking and reconstruction which relies not on feature extraction but dense, every pixel methods. As a single hand-held RGB camera flies over a static scene, we estimate detailed textured depth maps at selected keyframes to produce a surface patchwork with millions of vertices. We use the hundreds of images available in a video stream to improve the quality of a simple photometric data term, and minimise a global spatially regularised energy functional in a novel non-convex optimisation framework. Interleaved, we track the camera's 6DOF motion precisely by frame-rate whole image alignment against the entire dense model. Our algorithms are highly parallelisable throughout and DTAM achieves real-time performance using current commodity GPU hardware. We demonstrate that a dense model permits superior tracking performance under rapid motion compared to a state of the art method using features; and also show the additional usefulness of the dense model for real-time scene interaction in a physics-enhanced augmented reality application.",
"An inherent property of objects in the world is that they only exist as meaningful entities over certain ranges of scale. If one aims at describing the structure of unknown real-world signals, then ..."
]
} |
1901.04210 | 2950381856 | Visual SLAM shows significant progress in recent years due to high attention from vision community but still, challenges remain for low-textured environments. Feature based visual SLAMs do not produce reliable camera and structure estimates due to insufficient features in a low-textured environment. Moreover, existing visual SLAMs produce partial reconstruction when the number of 3D-2D correspondences is insufficient for incremental camera estimation using bundle adjustment. This paper presents Edge SLAM, a feature based monocular visual SLAM which mitigates the above mentioned problems. Our proposed Edge SLAM pipeline detects edge points from images and tracks those using optical flow for point correspondence. We further refine these point correspondences using geometrical relationship among three views. Owing to our edge-point tracking, we use a robust method for two-view initialization for bundle adjustment. Our proposed SLAM also identifies the potential situations where estimating a new camera into the existing reconstruction is becoming unreliable and we adopt a novel method to estimate the new camera reliably using a local optimization technique. We present an extensive evaluation of our proposed SLAM pipeline with most popular open datasets and compare with the state-of-the art. Experimental result indicates that our Edge SLAM is robust and works reliably well for both textured and less-textured environment in comparison to existing state-of-the-art SLAMs. | Edge based Visual Odometry: Tarrio and Pedre @cite_14 present an edge-based visual odometry pipeline that uses edges as features for depth estimation. However, its camera estimation is erroneous because the odometry enforces only pairwise consistency, whereas global consistency checking is essential for accurate camera estimation over a long trajectory. Yang and Scherer @cite_15 present a direct odometry-based pipeline using points and lines, where the estimated camera poses are comparable with ORB SLAM @cite_5 for textured environments, but the pipeline does not consider loop closing, which is an integral part of SLAM. | {
"cite_N": [
"@cite_5",
"@cite_15",
"@cite_14"
],
"mid": [
"1612997784",
"2601655733",
"2215578878"
],
"abstract": [
"This paper presents ORB-SLAM, a feature-based monocular simultaneous localization and mapping (SLAM) system that operates in real time, in small and large indoor and outdoor environments. The system is robust to severe motion clutter, allows wide baseline loop closing and relocalization, and includes full automatic initialization. Building on excellent algorithms of recent years, we designed from scratch a novel system that uses the same features for all SLAM tasks: tracking, mapping, relocalization, and loop closing. A survival of the fittest strategy that selects the points and keyframes of the reconstruction leads to excellent robustness and generates a compact and trackable map that only grows if the scene content changes, allowing lifelong operation. We present an exhaustive evaluation in 27 sequences from the most popular datasets. ORB-SLAM achieves unprecedented performance with respect to other state-of-the-art monocular SLAM approaches. For the benefit of the community, we make the source code public.",
"Most visual odometry algorithm for a monocular camera focuses on points, either by feature matching, or direct alignment of pixel intensity, while ignoring a common but important geometry entity: edges. In this paper, we propose an odometry algorithm that combines points and edges to benefit from the advantages of both direct and feature based methods. It works better in texture-less environments and is also more robust to lighting changes and fast motion by increasing the convergence basin. We maintain a depth map for the keyframe then in the tracking part, the camera pose is recovered by minimizing both the photometric error and geometric error to the matched edge in a probabilistic framework. In the mapping part, edge is used to speed up and increase stereo matching accuracy. On various public datasets, our algorithm achieves better or comparable performance than state-of-the-art monocular odometry methods. In some challenging texture-less environments, our algorithm reduces the state estimation error over 50 .",
"In this work we present a novel algorithm for realtime visual odometry for a monocular camera. The main idea is to develop an approach between classical feature-based visual odometry systems and modern direct dense semi-dense methods, trying to benefit from the best attributes of both. Similar to feature-based systems, we extract information from the images, instead of working with raw image intensities as direct methods. In particular, the information extracted are the edges present in the image, while the rest of the algorithm is designed to take advantage of the structural information provided when pixels are treated as edges. Edge extraction is an efficient and higly parallelizable operation. The edge depth information extracted is dense enough to allow acceptable surface fitting, similar to modern semi-dense methods. This is a valuable attribute that feature-based odometry lacks. Experimental results show that the proposed method has similar drift than state of the art feature-based and direct methods, and is a simple algorithm that runs at realtime and can be parallelized. Finally, we have also developed an inertial aided version that successfully stabilizes an unmanned air vehicle in complex indoor environments using only a frontal camera, while running the complete solution in the embedded hardware on board the vehicle."
]
} |
1901.04248 | 2898695505 | The continuing monitoring and surveying of the nearby space to detect Near Earth Objects (NEOs) and Near Earth Asteroids (NEAs) are essential because of the threats that this kind of objects impose on the future of our planet. We need more computational resources and advanced algorithms to deal with the exponential growth of the digital cameras' performances and to be able to process (in near real-time) data coming from large surveys. This paper presents a software platform called NEARBY that supports automated detection of moving sources (asteroids) among stars from astronomical images. The detection procedure is based on the classic "blink" detection and, after that, the system supports visual analysis techniques to validate the moving sources, assisted by static and dynamical presentations. | The Near-Earth Asteroid Tracking (NEAT) program was among the first to implement a fully automated system for detecting asteroids, including controlling the telescope, acquiring wide-field images, and detecting NEOs using on-site computing power provided by a Sun Sparc 20 and a Sun Enterprise 450 computer @cite_10 . | {
"cite_N": [
"@cite_10"
],
"mid": [
"1991774714"
],
"abstract": [
"The Near-Earth Asteroid Tracking (NEAT) system operates autonomously at the Maui Space Surveillance Site on the summit of the extinct Haleakala Volcano Crater, Hawaii. The program began in 1995 December and continues with an observing run every month. Its astrometric observations result in discoveries of near-Earth objects (NEOs), both asteroids (NEAs) and comets, and other unusual minor planets. Each six-night run NEAT covers about 10 of the accessible sky, detects thousands of asteroids, and detects two to five NEAs. NEAT has also contributed more than 1500 preliminary designations of minor planets and 26,000 detections of main-belt asteroids. This paper presents a description of the NEAT system and discusses its capabilities, including sky coverage, limiting magnitude, and detection efficiency. NEAT is an effective discoverer of NEAs larger than 1 km and is a major contributor to NASA's goal of identifying all NEAs of this size. An expansion of NEAT into a network of three similar systems would be capable of discovering 90 of the 1 km and larger NEAs within the next 10–40 yr, while serving the additional role of satellite detection and tracking for the US Air Force. Daily updates of NEAT results during operational periods can be found at JPL's Web site (http: huey.jpl.nasa.gov spravdo neat.html). The images and information about the detected objects, including times of observation, positions, and magnitudes are made available via NASA's SkyMorph program."
]
} |
1901.04248 | 2898695505 | The continuing monitoring and surveying of the nearby space to detect Near Earth Objects (NEOs) and Near Earth Asteroids (NEAs) are essential because of the threats that this kind of objects impose on the future of our planet. We need more computational resources and advanced algorithms to deal with the exponential growth of the digital cameras' performances and to be able to process (in near real-time) data coming from large surveys. This paper presents a software platform called NEARBY that supports automated detection of moving sources (asteroids) among stars from astronomical images. The detection procedure is based on the classic "blink" detection and, after that, the system supports visual analysis techniques to validate the moving sources, assisted by static and dynamical presentations. | @cite_3 developed a highly automated moving object detection software package. Their approach maintains high efficiency while producing low false-detection rates by using two independent detection algorithms and combining the results, reducing the operator time associated with searching huge data sets. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2050563626"
],
"abstract": [
"With the deployment of large CCD mosaic cameras and their use in large-scale surveys to discover Solar system objects, there is a need for fast detection algorithms that can handle large data loads in a nearly automatic way. We present here an algorithm that we have developed. Our approach, by using two independent detection algorithms and combining the results, maintains high efficiency while producing low false-detection rates. These properties are crucial in order to reduce the operator time associated with searching these huge data sets. We have used this algorithm on two different mosaic data sets obtained using the CFH12K camera at the Canada–France–Hawaii Telescope (CFHT). Comparing the detection efficiency and false-detection rate of each individual algorithm with the combination of both, we show that our approach decreases the false detection rate by a factor of a few hundred to a thousand, while decreasing the ‘limiting magnitude’ (where the detection rate drops to 50 per cent) by only 0.1–0.3 mag. The limiting magnitude is similar to that of a human operator blinking the images. Our full pipeline also characterizes the magnitude efficiency of the entire system by implanting artificial objects in the data set. The detection portion of the package is publicly available."
]
} |
1901.04248 | 2898695505 | The continuing monitoring and surveying of the nearby space to detect Near Earth Objects (NEOs) and Near Earth Asteroids (NEAs) are essential because of the threats that this kind of objects impose on the future of our planet. We need more computational resources and advanced algorithms to deal with the exponential growth of the digital cameras' performances and to be able to process (in near real-time) data coming from large surveys. This paper presents a software platform called NEARBY that supports automated detection of moving sources (asteroids) among stars from astronomical images. The detection procedure is based on the classic "blink" detection and, after that, the system supports visual analysis techniques to validate the moving sources, assisted by static and dynamical presentations. | The first improved automatic detection algorithm, which cleans all stars in all individual images before combining them, was developed by @cite_13 . The algorithm uses many CCD images in order to detect very dark moving objects that are invisible in a single CCD image. It was first applied on a very small 35-cm telescope to discover a few asteroids down to mag 21 upon combining 40 individual images, each exposed for 3 minutes. | {
"cite_N": [
"@cite_13"
],
"mid": [
"2037538319"
],
"abstract": [
"We have devised an automatic detection algorithm for unresolved moving objects, such as asteroids and comets. The algorithm uses many CCD images in order to detect very dark moving objects that are invisible on a single CCD image. We carried out a trial observation to investigate its usefulness, using a 35-cm telescope. By using the algorithm, we succeeded to detect asteroids down to about 21mag. This algorithm will contribute significantly to searches for near-Earth objects and to solar-system astronomy."
]
} |
1901.04248 | 2898695505 | The continuing monitoring and surveying of the nearby space to detect Near Earth Objects (NEOs) and Near Earth Asteroids (NEAs) are essential because of the threats that this kind of objects impose on the future of our planet. We need more computational resources and advanced algorithms to deal with the exponential growth of the digital cameras' performances and to be able to process (in near real-time) data coming from large surveys. This paper presents a software platform called NEARBY that supports automated detection of moving sources (asteroids) among stars from astronomical images. The detection procedure is based on the classic "blink" detection and, after that, the system supports visual analysis techniques to validate the moving sources, assisted by static and dynamical presentations. | Pan-STARRS Moving Object Processing System (MOPS) @cite_0 is a modern software package that produces automatic asteroid discoveries and identifications from catalogs of transient detections from next-generation astronomical survey telescopes. MOPS achieves 99.5% efficiency in producing orbits from a synthetic but realistic population of asteroids. Besides major surveys, automated asteroid detection systems were also created by amateurs and small private surveys, such as the TOTAS survey @cite_1 carried out with the ESA-OGS 1m telescope, led by ESA, or the work of @cite_2 . | {
"cite_N": [
"@cite_0",
"@cite_1",
"@cite_2"
],
"mid": [
"2123353273",
"2251089998",
"197143460"
],
"abstract": [
"ABSTRACT.We describe the Pan-STARRS Moving Object Processing System (MOPS), a modern software package that produces automatic asteroid discoveries and identifications from catalogs of transient detections from next-generation astronomical survey telescopes. MOPS achieves >99.5 >99.5 efficiency in producing orbits from a synthetic but realistic population of asteroids whose measurements were simulated for a Pan-STARRS4-class telescope. Additionally, using a nonphysical grid population, we demonstrate that MOPS can detect populations of currently unknown objects such as interstellar asteroids. MOPS has been adapted successfully to the prototype Pan-STARRS1 telescope despite differences in expected false detection rates, fill-factor loss, and relatively sparse observing cadence compared to a hypothetical Pan-STARRS4 telescope and survey. MOPS remains highly efficient at detecting objects but drops to 80 efficiency at producing orbits. This loss is primarily due to configurable MOPS processing limits that a...",
"Abstract ESA's 1-m telescope on Tenerife, the Optical Ground Station (OGS), has been used for observing NEOs since 2009. Part of the observational activity is the demonstration and test of survey observation strategies. During the observations, a total of 11 near-Earth objects have been discovered in about 360 h of observing time from 2009 to 2014. The survey observations are performed by imaging the same area in the sky 3 or 4 times within a 15–20 min time interval. A software robot analyses the images, searching for moving objects. The survey strategies and related data processing algorithms are described in this paper.",
"In this work we present a system for autonomous discovery of asteroids, space trash and other moving objects. This system performs astronomical image data reduction based on an image processing pipeline. The processing steps of the pipeline include astrometric and photometric reduction, sequence alignment, moving object detection and astronomical analysis, making the system capable of discovering and monitoring previously unknown moving objects in the night sky."
]
} |
1901.04248 | 2898695505 | The continuing monitoring and surveying of the nearby space to detect Near Earth Objects (NEOs) and Near Earth Asteroids (NEAs) are essential because of the threats that this kind of objects impose on the future of our planet. We need more computational resources and advanced algorithms to deal with the exponential growth of the digital cameras' performances and to be able to process (in near real-time) data coming from large surveys. This paper presents a software platform called NEARBY that supports automated detection of moving sources (asteroids) among stars from astronomical images. The detection procedure is based on the classic "blink" detection and, after that, the system supports visual analysis techniques to validate the moving sources, assisted by static and dynamical presentations. | Copândean et al. @cite_6 propose an automated pipeline prototype for asteroid detection, written in Python under Linux, which calls some third-party astrophysics libraries. The current version of the proposed pipeline prototype is tightly coupled with the data obtained from the 2.5-meter-diameter Isaac Newton Telescope (INT) located in La Palma, Canary Islands, Spain, and represents the basis on which the NEARBY platform is built. | {
"cite_N": [
"@cite_6"
],
"mid": [
"2770825526"
],
"abstract": [
"Near Earth Asteroids (NEAs) are discovered daily, mainly by few major surveys, nevertheless many of them remain unobserved for years, even decades. Even so, there is room for new discoveries, including those submitted by smaller projects and amateur astronomers. Besides the well-known surveys that have their own automated system of asteroid detection, there are only a few software solutions designed to help amateurs and mini-surveys in NEAs discovery. Some of these obtain their results based on the blink method in which a set of reduced images are shown one after another and the astronomer has to visually detect real moving objects in a series of images. This technique becomes harder with the increase in size of the CCD cameras. Aiming to replace manual detection we propose an automated pipeline prototype for asteroids detection, written in Python under Linux, which calls some 3rd party astrophysics libraries."
]
} |
1901.04359 | 2908905630 | Distributed synchronous stochastic gradient descent (S-SGD) has been widely used in training large-scale deep neural networks (DNNs), but it typically requires very high communication bandwidth between computational workers (e.g., GPUs) to exchange gradients iteratively. Recently, Top- @math sparsification techniques have been proposed to reduce the volume of data to be exchanged among workers. Top- @math sparsification can zero-out a significant portion of gradients without impacting the model convergence. However, the sparse gradients should be transferred with their irregular indices, which makes the sparse gradients aggregation difficult. Current methods that use AllGather to accumulate the sparse gradients have a communication complexity of @math , where @math is the number of workers, which is inefficient on low bandwidth networks with a large number of workers. We observe that not all top- @math gradients from @math workers are needed for the model update, and therefore we propose a novel global Top- @math (gTop- @math ) sparsification mechanism to address the problem. Specifically, we choose global top- @math largest absolute values of gradients from @math workers, instead of accumulating all local top- @math gradients to update the model in each iteration. The gradient aggregation method based on gTop- @math sparsification reduces the communication complexity from @math to @math . Through extensive experiments on different DNNs, we verify that gTop- @math S-SGD has nearly consistent convergence performance with S-SGD, and it has only slight degradations on generalization performance. In terms of scaling efficiency, we evaluate gTop- @math on a cluster with 32 GPU machines which are interconnected with 1 Gbps Ethernet. The experimental results show that our method achieves @math higher scaling efficiency than S-SGD and @math improvement than the existing Top- @math S-SGD. | In terms of gradient sparsification, which zeroes out a large proportion of gradients to reduce the communication size, @cite_17 and @cite_30 empirically demonstrate that up to 99% of the gradients can be zeroed out without hurting model convergence. Focusing on the study of sparsification, the other two works in @cite_16 and @cite_6 are the most closely related to our work. They have realized that efficient sparse AllReduce algorithms are non-trivial to implement, and they both propose the AllGather solution. However, the AllGather method has a cost that increases linearly with the number of workers. Therefore, AllGather could be inefficient when scaling to large clusters. | {
"cite_N": [
"@cite_30",
"@cite_16",
"@cite_6",
"@cite_17"
],
"mid": [
"2775776326",
"2789218400",
"",
"2606891064"
],
"abstract": [
"Highly distributed training of Deep Neural Networks (DNNs) on future compute platforms (offering 100 of TeraOps s of computational capacity) is expected to be severely communication constrained. To overcome this limitation, new gradient compression techniques are needed that are computationally friendly, applicable to a wide variety of layers seen in Deep Neural Networks and adaptable to variations in network architectures as well as their hyper-parameters. In this paper we introduce a novel technique - the Adaptive Residual Gradient Compression (AdaComp) scheme. AdaComp is based on localized selection of gradient residues and automatically tunes the compression rate depending on local activity. We show excellent results on a wide spectrum of state of the art Deep Learning models in multiple domains (vision, speech, language), datasets (MNIST, CIFAR10, ImageNet, BN50, Shakespeare), optimizers (SGD with momentum, Adam) and network parameters (number of learners, minibatch-size etc.). Exploiting both sparsity and quantization, we demonstrate end-to-end compression rates of 200X for fully-connected and recurrent layers, and 40X for convolutional layers, without any noticeable degradation in model accuracies.",
"One of the main drivers behind the rapid recent advances in machine learning has been the availability of efficient system support. This comes both through hardware acceleration, but also in the form of efficient software frameworks and programming models. Despite significant progress, scaling compute-intensive machine learning workloads to a large number of compute nodes is still a challenging task, with significant latency and bandwidth demands. In this paper, we address this challenge, by proposing SPARCML, a general, scalable communication layer for machine learning applications. SPARCML is built on the observation that many distributed machine learning algorithms either have naturally sparse communication patters, or have updates which can be sparsified in a structured way for improved performance, without any convergence or accuracy loss. To exploit this insight, we design and implement a set of communication efficient protocols for sparse input data, in conjunction with efficient machine learning algorithms which can leverage these primitives. Our communication protocols generalize standard collective operations, by allowing processes to contribute sparse input data vectors, of heterogeneous sizes. We call these operations sparse-input collectives, and present efficient practical algorithms with strong theoretical bounds on their running time and communication cost. Our generic communication layer is enriched with additional features, such support for non-blocking (asynchronous) operations, and support for low-precision data representations. We validate our algorithmic results experimentally on a range of large-scale machine learning applications and target architectures, showing that we can leverage sparsity for order- of-magnitude runtime savings, compared to state-of-the art methods and frameworks.",
"",
"We make distributed stochastic gradient descent faster by exchanging sparse updates instead of dense updates. Gradient updates are positively skewed as most updates are near zero, so we map the 99 smallest updates (by absolute value) to zero then exchange sparse matrices. This method can be combined with quantization to further improve the compression. We explore different configurations and apply them to neural machine translation and MNIST image classification tasks. Most configurations work on MNIST, whereas different configurations reduce convergence rate on the more complex translation task. Our experiments show that we can achieve up to 49 speed up on MNIST and 22 on NMT without damaging the final accuracy or BLEU."
]
} |
1901.04364 | 2910001187 | In critical care, intensivists are required to continuously monitor high dimensional vital signs and lab measurements to detect and diagnose acute patient conditions. This has always been a challenging task. In this study, we propose a novel self-correcting deep learning prediction approach to address this challenge. We focus on an example of the prediction of acute kidney injury (AKI). Compared with the existing models, our method has a number of distinct features: we utilized the accumulative data of patients in ICU; we developed a self-correcting mechanism that feeds errors from the previous predictions back into the network; we also proposed a regularization method that takes into account not only the model's prediction error on the label but also its estimation errors on the input data. This mechanism is applied in both regression and classification tasks. We compared the performance of our proposed method with the conventional deep learning models on two real-world clinical datasets and demonstrated that our proposed model constantly outperforms these baseline models. In particular, the proposed model achieved area under ROC curve at 0.893 on the MIMIC III dataset, and 0.871 on the Philips eICU dataset. | AKI prediction using features from EHR data is attracting widespread research interest @cite_11 @cite_8 . In particular, much research in recent years has focused on predictive modeling over a broad population to identify high-risk subjects as early as possible @cite_8 . Initially, AKI prediction was modeled with standard statistical methods, including logistic regression, discriminant analysis, or decision tree algorithms @cite_1 @cite_16 @cite_5 @cite_18 . Data were accumulated using a sliding window method, and the prediction was generated at a specified interval (per hour, two hours, day, shift, etc.). Alternatively, some models @cite_11 could also generate a risk score in real time whenever a new data point was received. | {
"cite_N": [
"@cite_18",
"@cite_8",
"@cite_1",
"@cite_5",
"@cite_16",
"@cite_11"
],
"mid": [
"2781246281",
"2775789188",
"2141413316",
"2067825615",
"",
"2286158066"
],
"abstract": [
"Background:A major problem in treating acute kidney injury (AKI) is that clinical criteria for recognition are markers of established kidney damage or impaired function; treatment before such damag...",
"Acute Kidney Injury (AKI), the abrupt decline in kidney function due to temporary or permanent injury, is associated with increased mortality, morbidity, length of stay, and hospital cost. Sometimes, simple interventions such as medication review or hydration can prevent AKI. There is therefore interest in estimating risk of AKI at hospitalization. To gain insight into this task, we employ multilayer perceptron (MLP) and recurrent neural networks (RNNs) using serum creatinine (sCr) as a lone feature. We explore different feature input structures, including variable-length look-backs and a nested formulation for rehospitalized patients with previous sCr measurements. Experimental results show that the simplest model, MLP processing the sum of sCr, had best performance: AUROC 0.92 and AUPRC 0.70. Such a simple model could be easily integrated into an EHR. Preliminary results also suggest that inpatient data streams with missing outpatient measurements---common in the medical setting---might be best modeled with a tailored architecture.",
"The risk of mortality associated with acute renal failure (ARF) after open-heart surgery continues to be distressingly high. Accurate prediction of ARF provides an opportunity to develop strategies for early diagnosis and treatment. The aim of this study was to develop a clinical score to predict postoperative ARF by incorporating the effect of all of its major risk factors. A total of 33,217 patients underwent open-heart surgery at the Cleveland Clinic Foundation (1993 to 2002). The primary outcome was ARF that required dialysis. The scoring model was developed in a randomly selected test set ( n = 15,838) and was validated on the remaining patients. Its predictive accuracy was compared by area under the receiver operating characteristic curve. The score ranges between 0 and 17 points. The ARF frequency at each score level in the validation set fell within the 95 confidence intervals (CI) of the corresponding frequency in the test set. Four risk categories of increasing severity (scores 0 to 2, 3 to 5, 6 to 8, and 9 to 13) were formed arbitrarily. The frequency of ARF across these categories in the test set ranged between 0.5 and 22.1 . The score was also valid in predicting ARF across all risk categories. The area under the receiver operating characteristic curve for the score in the test set was 0.81 (95 CI 0.78 to 0.83) and was similar to that in the validation set (0.82; 95 CI 0.80 to 0.85; P = 0.39). In conclusion, a score is valid and accurate in predicting ARF after open-heart surgery; along with increasing its clinical utility, the score can help in planning future clinical trials of ARF.",
"Background— Renal insufficiency after coronary artery bypass graft (CABG) surgery is associated with increased short-term and long-term mortality. We hypothesized that preoperative patient characteristics could be used to predict the patient-specific risk of developing postoperative renal insufficiency. @PARASPLIT Methods and Results— Data were prospectively collected on 11 301 patients in northern New England who underwent isolated CABG surgery between 2001 and 2005. Based on National Kidney Foundation definitions, moderate renal insufficiency was defined as a GFR 12 000, prior CABG, congestive heart failure, peripheral vascular disease, diabetes, hypertension, and preoperative intraaortic balloon pump. The predictive model was significant with χ2 150.8, probability value <0.0001. The model discriminated well, ROC 0.72 (95 CI: 0.68 to 0.75). The model was well calibrated according to the Hosmer-Lemeshow test. @PARASPLIT Conclusions— We developed a robust prediction rule to assist clinicians in identifying patients with normal, or near normal, preoperative renal function who are at high risk of developing severe renal insufficiency. Physicians may be able to take steps to limit this adverse outcome and its associated increase in morbidity and mortality.",
"",
"The data contained within the electronic health record (EHR) is “big” from the standpoint of volume, velocity, and variety. These circumstances and the pervasive trend towards EHR adoption have sparked interest in applying big data predictive analytic techniques to EHR data. Acute kidney injury (AKI) is a condition well suited to prediction and risk forecasting; not only does the consensus definition for AKI allow temporal anchoring of events, but no treatments exist once AKI develops, underscoring the importance of early identification and prevention. The Acute Dialysis Quality Initiative (ADQI) convened a group of key opinion leaders and stakeholders to consider how best to approach AKI research and care in the “Big Data” era. This manuscript addresses the core elements of AKI risk prediction and outlines potential pathways and processes. We describe AKI prediction targets, feature selection, model development, and data display."
]
} |
1901.03865 | 2768363122 | What is the impact of software engineering research on current practices in industry? In this paper, I report on my direct experience as a PhD post-doc working in software engineering research projects, and then spending the following five years as an engineer in two different companies (the first one being the same I worked in collaboration with during my post-doc). Given a background in software engineering research, what cutting-edge techniques and tools from academia did I use in my daily work when developing and testing the systems of these companies? Regarding validation and verification (my main area of research), the answer is rather short: as far as I can tell, only FindBugs. In this paper, I report on why this was the case, and discuss all the challenging, complex open problems we face in industry and which somehow are “neglected” in the academic circles. In particular, I will first discuss what actual tools I could use in my daily work, such as JaCoCo and Selenium. Then, I will discuss the main open problems I faced, particularly related to environment simulators, unit and web testing. After that, popular topics in academia are presented, such as UML, regression and mutation testing. Their lack of impact on the type of projects I worked on in industry is then discussed. Finally, from this industrial experience, I provide my opinions about how this situation can be improved, in particular related to how academics are evaluated, and advocate for a greater involvement into open-source projects. | That academic research has only limited impact on practice has long been discussed in academia. For example, Briand @cite_28 shared his 20-year experience of collaborating with around 30 different companies and public institutions. Being rewarded for the number of publications, in contrast to other engineering fields that put more focus on patents and industry collaborations, is one of the causes. This is also related to the fact that software engineering departments are often part of mathematics or computer science, and not engineering. ``This is quite bizarre: Just imagine mechanical or civil engineering being part of a physics department'' @cite_28 . | {
"cite_N": [
"@cite_28"
],
"mid": [
"2088697603"
],
"abstract": [
"The author provides, based on 20 years of research and industrial experience, his assessment of software engineering research. He then builds on such analysis to provide recommendations on how we need to change as a research community to increase our impact, gain credibility, and ultimately ensure the success and recognition of our young discipline. The gist of the author's message is that we need to become a true engineering discipline."
]
} |
1901.03865 | 2768363122 | What is the impact of software engineering research on current practices in industry? In this paper, I report on my direct experience as a PhD post-doc working in software engineering research projects, and then spending the following five years as an engineer in two different companies (the first one being the same I worked in collaboration with during my post-doc). Given a background in software engineering research, what cutting-edge techniques and tools from academia did I use in my daily work when developing and testing the systems of these companies? Regarding validation and verification (my main area of research), the answer is rather short: as far as I can tell, only FindBugs. In this paper, I report on why this was the case, and discuss all the challenging, complex open problems we face in industry and which somehow are “neglected” in the academic circles. In particular, I will first discuss what actual tools I could use in my daily work, such as JaCoCo and Selenium. Then, I will discuss the main open problems I faced, particularly related to environment simulators, unit and web testing. After that, popular topics in academia are presented, such as UML, regression and mutation testing. Their lack of impact on the type of projects I worked on in industry is then discussed. Finally, from this industrial experience, I provide my opinions about how this situation can be improved, in particular related to how academics are evaluated, and advocate for a greater involvement into open-source projects. | The problem of science vs. engineering also has an impact on teaching and education. As Offutt stated: ``Isn't it just a little strange that we prepare software engineers by teaching them computer science?'' @cite_37 . Practical software engineering is different from science, and needs different teaching methods. | {
"cite_N": [
"@cite_37"
],
"mid": [
"2125291571"
],
"abstract": [
"Based on over 20 years of teaching and research experience, the author provides his assessment of software engineering education. He then builds on the analysis to provide recommendations on how we need to diverge from computer science to increase our impact, gain credibility, and ultimately ensure the success and recognition of our young discipline. A key behind the author's message is that we need to become a true engineering discipline."
]
} |
1901.03865 | 2768363122 | What is the impact of software engineering research on current practices in industry? In this paper, I report on my direct experience as a PhD post-doc working in software engineering research projects, and then spending the following five years as an engineer in two different companies (the first one being the same I worked in collaboration with during my post-doc). Given a background in software engineering research, what cutting-edge techniques and tools from academia did I use in my daily work when developing and testing the systems of these companies? Regarding validation and verification (my main area of research), the answer is rather short: as far as I can tell, only FindBugs. In this paper, I report on why this was the case, and discuss all the challenging, complex open problems we face in industry and which somehow are “neglected” in the academic circles. In particular, I will first discuss what actual tools I could use in my daily work, such as JaCoCo and Selenium. Then, I will discuss the main open problems I faced, particularly related to environment simulators, unit and web testing. After that, popular topics in academia are presented, such as UML, regression and mutation testing. Their lack of impact on the type of projects I worked on in industry is then discussed. Finally, from this industrial experience, I provide my opinions about how this situation can be improved, in particular related to how academics are evaluated, and advocate for a greater involvement into open-source projects. | One way to improve the state of software engineering research is to have close collaborations with industrial partners. In this regard, @cite_17 performed a systematic literature review on the topic of industry-academia collaborations, collecting and discussing 33 articles published between 1995 and 2014. On the other hand, the authors of @cite_33 discussed their personal experience of industry collaborations they had in both Canada and Turkey. Furthermore, @cite_36 also conducted a survey among practitioners regarding which testing topics they want the research community to work on. | {
"cite_N": [
"@cite_36",
"@cite_33",
"@cite_17"
],
"mid": [
"2622213173",
"2466590866",
"2494902551"
],
"abstract": [
"The level of industry-academia collaboration (IAC) in software engineering in general and in software testing in particular is quite low. Many researchers and practitioners are not collaborating with the \"other side\" to solve industrial problems. To shed light on the above issue and to characterize precisely what industry wants from academia in software testing, we solicited practitioners' opinions on their challenges in different testing activities and also the particularly relevant topics that they want the research community to work on. This short paper aims to draw the community's attention to the important issue of strengthening IAC with the hope of more IAC in software testing in the areas of most importance to the industry.",
"Collaboration between industry and academia supports improvement and innovation in industry and helps to ensure industrial relevance in academic research. However, many researchers and practitioners believe that the level of joint industry–academia collaborations (IAC) in software engineering (SE) is still relatively very low, compared to the amount of activity in each of the two communities. The goal of the empirical study reported in this paper is to characterize a set of collaborative industry–academia R&D projects in the area of software testing conducted by the authors (based in Canada and Turkey) with respect to a set of challenges, patterns and anti-patterns identified by a recent Systematic Literature Review study, with the aim of contributing to the body of evidence in the area of IAC, for the benefit of SE researchers and practitioners in conducting successful IAC projects in software testing and in software engineering in general. To address the above goal, a pool of ten IAC projects (six completed, two failed and two ongoing) all in the area of software testing, which the authors have led or have had active roles in, were selected as objects of study and were analyzed (both quantitatively and qualitatively) with respect to the set of selected challenges, patterns and anti-patterns. As outputs, the study presents a set of empirical findings and evidence-based recommendations, e.g.: it has been observed that even if an IAC project may seem perfect from many aspects, one single major challenge (e.g., disagreement in confidentiality agreements) can lead to its failure. Thus, we recommend that both parties (academics and practitioners) consider all the challenges early on and proactively work together to eliminate the risk of challenges in IAC projects. We furthermore report correlation and interrelationship of challenges, patterns and anti-patterns with project success measures. This study hopes to encourage and benefit other SE researchers and practitioners in conducting successful IAC projects in software testing and in software engineering in general in the future.",
"Context: The global software industry and the software engineering (SE) academia are two large communities. However, unfortunately, the level of joint industry-academia collaborations in SE is still relatively very low, compared to the amount of activity in each of the two communities. It seems that the two 'camps' show only limited interest motivation to collaborate with one other. Many researchers and practitioners have written about the challenges, success patterns (what to do, i.e., how to collaborate) and anti-patterns (what not do do) for industry-academia collaborations.Objective: To identify (a) the challenges to avoid risks to the collaboration by being aware of the challenges, (b) the best practices to provide an inventory of practices (patterns) allowing for an informed choice of practices to use when planning and conducting collaborative projects.Method: A systematic review has been conducted. Synthesis has been done using grounded-theory based coding procedures.Results: Through thematic analysis we identified 10 challenge themes and 17 best practice themes. A key outcome was the inventory of best practices, the most common ones recommended in different contexts were to hold regular workshops and seminars with industry, assure continuous learning from industry and academic sides, ensure management engagement, the need for a champion, basing research on real-world problems, showing explicit benefits to the industry partner, be agile during the collaboration, and the co-location of the researcher on the industry side.Conclusion: Given the importance of industry-academia collaboration to conduct research of high practical relevance we provide a synthesis of challenges and best practices, which can be used by researchers and practitioners to make informed decisions on how to structure their collaborations."
]
} |
1901.03865 | 2768363122 | What is the impact of software engineering research on current practices in industry? In this paper, I report on my direct experience as a PhD post-doc working in software engineering research projects, and then spending the following five years as an engineer in two different companies (the first one being the same I worked in collaboration with during my post-doc). Given a background in software engineering research, what cutting-edge techniques and tools from academia did I use in my daily work when developing and testing the systems of these companies? Regarding validation and verification (my main area of research), the answer is rather short: as far as I can tell, only FindBugs. In this paper, I report on why this was the case, and discuss all the challenging, complex open problems we face in industry and which somehow are “neglected” in the academic circles. In particular, I will first discuss what actual tools I could use in my daily work, such as JaCoCo and Selenium. Then, I will discuss the main open problems I faced, particularly related to environment simulators, unit and web testing. After that, popular topics in academia are presented, such as UML, regression and mutation testing. Their lack of impact on the type of projects I worked on in industry is then discussed. Finally, from this industrial experience, I provide my opinions about how this situation can be improved, in particular related to how academics are evaluated, and advocate for a greater involvement into open-source projects. | The discrepancy between industry and academia is also clear when looking at what topics are discussed at practitioner conferences compared to the academic ones @cite_32 . For example, in the context of testing, practitioners are more interested in mobile and agile testing, whereas academics seem to focus more on model-based and combinatorial testing, which are topics seldom discussed at practitioner conferences @cite_32 . | {
"cite_N": [
"@cite_13"
],
"mid": [
"2462111230"
],
"abstract": [
"IT-driven innovation is an enormous factor in the worldwide economic leadership of the United States. It is larger than finance, construction, or transportation, and it employs nearly 6 of the US workforce. The top three companies, as measured by market capitalization, are IT companies - Apple, Google (now Alphabet), and Microsoft. Facebook, a relatively recent entry in the top 10 list by market capitalization has surpassed Walmart, the nation's largest retailer, and the largest employer in the world. The net income of just the top three exceeds $80 billion - roughly 100 times the total budget of the NSF CISE directorate which funds 87 of computing research. In short, the direct return on federal research investments in IT research has been enormously profitable to the nation. The IT industry ecosystem is also evolving. The time from conception to market of successful products has been cut from years to months. Product life cycles are increasingly a year or less. This change has pressured companies to focus industrial R&D on a pipeline or portfolio of technologies that bring immediate, or almost immediate, value to the companies. To defeat the competition and stay ahead of the pack, a company must devote resources to realizing gains that are shorter term, and must remain agile to respond quickly to market changes driven by new technologies, new startups, evolving user experience expectations, and the continuous consumer demand for new and exciting products. Amidst this landscape, the Computing Community Consortium convened a round-table of industry and academic participants to better understand the landscape of industry-academic interaction, and to discuss possible actions that might be taken to enhance those interactions. We close with some recommendations for actions that could expand the lively conversation we experienced at the round-table to a national scale."
]
} |
1901.03865 | 2768363122 | What is the impact of software engineering research on current practices in industry? In this paper, I report on my direct experience as a PhD post-doc working in software engineering research projects, and then spending the following five years as an engineer in two different companies (the first one being the same I worked in collaboration with during my post-doc). Given a background in software engineering research, what cutting-edge techniques and tools from academia did I use in my daily work when developing and testing the systems of these companies? Regarding validation and verification (my main area of research), the answer is rather short: as far as I can tell, only FindBugs. In this paper, I report on why this was the case, and discuss all the challenging, complex open problems we face in industry and which somehow are “neglected” in the academic circles. In particular, I will first discuss what actual tools I could use in my daily work, such as JaCoCo and Selenium. Then, I will discuss the main open problems I faced, particularly related to environment simulators, unit and web testing. After that, popular topics in academia are presented, such as UML, regression and mutation testing. Their lack of impact on the type of projects I worked on in industry is then discussed. Finally, from this industrial experience, I provide my opinions about how this situation can be improved, in particular related to how academics are evaluated, and advocate for a greater involvement into open-source projects. | More recently, @cite_13 conducted a survey on industry-academia collaborations in computer science and software engineering, involving 60 academics and 66 people in industry. One conclusion was: ``There is a lack of communication and understanding between academia and industry ( @math ) there is a lot of mistrust of academics among those in industry ( @math ) Both sides, however, seem to be open to collaboration and would love to see stronger relationships''. | {
"cite_N": [
"@cite_32"
],
"mid": [
"2760376440"
],
"abstract": [
"To determine how industry and academia approach software testing, researchers compared the titles of presentations from selected conferences in each of the two communities. The results shed light on the root cause of low industry–academia collaboration and led to suggestions on how to improve this situation."
]
} |
1901.03865 | 2768363122 | What is the impact of software engineering research on current practices in industry? In this paper, I report on my direct experience as a PhD post-doc working in software engineering research projects, and then spending the following five years as an engineer in two different companies (the first one being the same I worked in collaboration with during my post-doc). Given a background in software engineering research, what cutting-edge techniques and tools from academia did I use in my daily work when developing and testing the systems of these companies? Regarding validation and verification (my main area of research), the answer is rather short: as far as I can tell, only FindBugs. In this paper, I report on why this was the case, and discuss all the challenging, complex open problems we face in industry and which somehow are “neglected” in the academic circles. In particular, I will first discuss what actual tools I could use in my daily work, such as JaCoCo and Selenium. Then, I will discuss the main open problems I faced, particularly related to environment simulators, unit and web testing. After that, popular topics in academia are presented, such as UML, regression and mutation testing. Their lack of impact on the type of projects I worked on in industry is then discussed. Finally, from this industrial experience, I provide my opinions about how this situation can be improved, in particular related to how academics are evaluated, and advocate for a greater involvement into open-source projects. | In the past, there were attempts from SIGSOFT (the Impact Project, https://www.sigsoft.org/impact.html ) to keep track of and promote academic impact on software engineering practice @cite_5 . Different success stories were discussed, where ``ideas'' investigated in academia were then considered and adopted in industry. However, it was also estimated that such transfer of ideas takes roughly 15-20 years. Unfortunately, such a very valuable initiative from SIGSOFT seems to have been abandoned for many years (since 2008) because ``the project was a volunteer effort, supported only by some very modest funding for travel to project meetings. Eventually the participants slowly but surely felt the stronger pull of their individual research endeavors and we suspended our activities'' (private communication with one of the Impact Project organizers). | {
"cite_N": [
"@cite_5"
],
"mid": [
"2109386870"
],
"abstract": [
"The impact project provides a solid and scholarly assessment of the impact software engineering research has had on software engineering practice. The assessment takes the form of a series of studies and briefings, each involving literature searches and, where possible, personal interviews."
]
} |
1901.03865 | 2768363122 | What is the impact of software engineering research on current practices in industry? In this paper, I report on my direct experience as a PhD post-doc working in software engineering research projects, and then spending the following five years as an engineer in two different companies (the first one being the same I worked in collaboration with during my post-doc). Given a background in software engineering research, what cutting-edge techniques and tools from academia did I use in my daily work when developing and testing the systems of these companies? Regarding validation and verification (my main area of research), the answer is rather short: as far as I can tell, only FindBugs. In this paper, I report on why this was the case, and discuss all the challenging, complex open problems we face in industry and which somehow are “neglected” in the academic circles. In particular, I will first discuss what actual tools I could use in my daily work, such as JaCoCo and Selenium. Then, I will discuss the main open problems I faced, particularly related to environment simulators, unit and web testing. After that, popular topics in academia are presented, such as UML, regression and mutation testing. Their lack of impact on the type of projects I worked on in industry is then discussed. Finally, from this industrial experience, I provide my opinions about how this situation can be improved, in particular related to how academics are evaluated, and advocate for a greater involvement into open-source projects. | As of the time of writing, one of the most famous success stories is @cite_1 : a company making static/dynamic analysis tools that was founded at Stanford University, and then sold for more than $300 million. http://www.coverity.com/press-releases/synopsys-completes-coverity-acquisition | {
"cite_N": [
"@cite_1"
],
"mid": [
"2008626182"
],
"abstract": [
"How Coverity built a bug-finding tool, and a business, around the unlimited supply of bugs in software systems."
]
} |
1901.03865 | 2768363122 | What is the impact of software engineering research on current practices in industry? In this paper, I report on my direct experience as a PhD post-doc working in software engineering research projects, and then spending the following five years as an engineer in two different companies (the first one being the same I worked in collaboration with during my post-doc). Given a background in software engineering research, what cutting-edge techniques and tools from academia did I use in my daily work when developing and testing the systems of these companies? Regarding validation and verification (my main area of research), the answer is rather short: as far as I can tell, only FindBugs. In this paper, I report on why this was the case, and discuss all the challenging, complex open problems we face in industry and which somehow are “neglected” in the academic circles. In particular, I will first discuss what actual tools I could use in my daily work, such as JaCoCo and Selenium. Then, I will discuss the main open problems I faced, particularly related to environment simulators, unit and web testing. After that, popular topics in academia are presented, such as UML, regression and mutation testing. Their lack of impact on the type of projects I worked on in industry is then discussed. Finally, from this industrial experience, I provide my opinions about how this situation can be improved, in particular related to how academics are evaluated, and advocate for a greater involvement into open-source projects. | The importance of industry-academia collaborations and the aim of achieving usable results for practitioners are not specific only to software engineering; they are also significant for many other fields such as Computer-Human Interaction @cite_18 , Data Mining @cite_27 and even medicine @cite_34 @cite_10 @cite_11 . | {
"cite_N": [
"@cite_18",
"@cite_27",
"@cite_34",
"@cite_10",
"@cite_11"
],
"mid": [
"175386426",
"1536944383",
"2000522005",
"2108655250",
"2108120825"
],
"abstract": [
"",
"Data mining (DM) research has successfully developed advanced DM techniques and algorithms over the last few decades, and many organisations have great expectations to take more benefit of their data warehouses in decision making. Currently, the strong focus of most DM-researchers is still only on technology-oriented topics. Commonly the DM research has several stakeholders, the major of which can be divided into internal and external ones each having their own point of view, and which are at least partly conflicting. The most important internal groups of stakeholders are the DM research community and academics in other disciplines. The most important external stakeholder groups are managers and domain experts who have their own utility-based interests to DM and DM research results. In this paper we discuss these practice-oriented points of view towards DM research and suggest broader discussions inside the DM research community about who should do that kind of research. We bring in the discussion several topics developed in the information systems (IS) discipline and show some similarities between IS and DM systems. DM systems have also their own peculiarities and we conclude that researchers who take into account human and organisational aspects related to DM systems need to have also some understanding about DM. This makes us suggest that the research area inside the DM community should be made broader than the current heavily technology-oriented one.",
"Introduction: Although concerns have been raised about industry support of continuing medical education (CME), there are few published reports of academia-industry collaboration in the field. We describe and evaluate Pri-Med, a CME experience for primary care clinicians developed jointly by the Harvard Medical School (HMS) and M C Communications. Methods: Since 1995, 19 Pri-Med conferences have been held in four cities, drawing more than 100,000 primary care clinicians. The educational core of each Pri-Med conference is a 3-day Harvard course, “Current Clinical Issues in Primary Care.” Course content is determined by a faculty committee independent of any commercial influence. Revenues from multiple industry sources flow through M C Communications to the medical school as an educational grant to support primary care education, Pri-Med also offers separate pharmaceutical company–funded symposia. Results: Comparing the two educational approaches during four conferences, 221 HMS talks and 103 symposia were presented. The HMS course covered a wide range with 133 topics; the symposia focused on 30 topics, most of which were linked to recently approved new therapeutic products manufactured by the funders. Both the course and the symposia were highly rated by attendees. Discussion: When CME presentations for primary care physicians receive direct support from industry, the range of offered topics is narrower than when programs are developed independently of such support. There appear to be no differences in the perceived quality of presentations delivered with and without such support. Our experience suggests that a firewall between program planners and providers of financial support will result in a broader array of educational subjects relevant to the field of primary care.",
"Aim: To identify which factors are important barriers to effective collaboration between Japanese academia and industry in the field of regenerative medicine. Methods: In November–December 2006, in-person semistructured interviews were conducted with representatives from nine Japanese companies that are engaged in developing regenerative medicine products in collaboration with academia and two academic scientists with the successful collaborative experiences with companies. Results & conclusions: The major barriers to collaboration relate to the inadequacy of particular systems in academic institutions (particularly technology licensing organizations and mobility between industry and academia), the knowledge deficit of academic personnel with respect to industry, the inadequacy of particular governmental support systems and the Japanese public's view of these collaborations, which has resulted in overly strict conflict of interest guidelines. We suggest some approaches to overcome these barriers.",
"The development and commercialization of diagnostic assays is distinct from that of therapeutic drugs in many important respects; for example, there are more variable regulatory requirements and reduced outside investments for diagnostics. The diagnostics industry has a pro-collaborative stance, because there is considerable mutual benefit in working in partnership with university or government researchers. However, there are substantial barriers to industry-academic collaborations. A Clinical and Translational Science Awards Industry Forum titled “Promoting Efficient and Effective Collaborations Among Academia, Government, and Industry” was held in February 2010, and a session at this forum was organized to list some of the most important barriers to diagnostics development and to discuss some possible solutions."
]
} |
1901.04095 | 2910109410 | Network embedding aims to learn a latent, low-dimensional vector representations of network nodes, effective in supporting various network analytic tasks. While prior arts on network embedding focus primarily on preserving network topology structure to learn node representations, recently proposed attributed network embedding algorithms attempt to integrate rich node content information with network topological structure for enhancing the quality of network embedding. In reality, networks often have sparse content, incomplete node attributes, as well as the discrepancy between node attribute feature space and network structure space, which severely deteriorates the performance of existing methods. In this paper, we propose a unified framework for attributed network embedding-attri2vec-that learns node embeddings by discovering a latent node attribute subspace via a network structure guided transformation performed on the original attribute space. The resultant latent subspace can respect network structure in a more consistent way towards learning high-quality node representations. We formulate an optimization problem which is solved by an efficient stochastic gradient descent algorithm, with linear time complexity to the number of nodes. We investigate a series of linear and non-linear transformations performed on node attributes and empirically validate their effectiveness on various types of networks. Another advantage of attri2vec is its ability to solve out-of-sample problems, where embeddings of new coming nodes can be inferred from their node attributes through the learned mapping function. Experiments on various types of networks confirm that attri2vec is superior to state-of-the-art baselines for node classification, node clustering, as well as out-of-sample link prediction tasks. The source code of this paper is available at this https URL. | Existing research work on network embedding can be roughly divided into two categories @cite_25 : structure preserving network embedding that leverages network structure only, and attributed network embedding that couples network structure with node attributes to improve network embedding. | {
"cite_N": [
"@cite_25"
],
"mid": [
"2963512530"
],
"abstract": [
"With the widespread use of information technologies, information networks are becoming increasingly popular to capture complex relationships across various disciplines. In reality, the large scale of information networks often makes network analytic tasks computationally expensive or intractable. Network representation learning has been recently proposed as a new learning paradigm to embed network vertices into a low-dimensional vector space, by preserving network topology structure, vertex content, and other side information. This facilitates the original network to be easily handled in the new vector space for further analysis. In this survey, we perform a comprehensive review of the current literature on network representation learning in the data mining and machine learning field. We propose new taxonomies to categorize and summarize the state-of-the-art network representation learning techniques according to the underlying learning mechanisms, the network information intended to preserve, as well as the algorithmic designs and methodologies. We summarize evaluation protocols used for validating network representation learning including published benchmark datasets, evaluation methods, and open source projects. We also perform empirical studies to compare the performance of representative algorithms on common datasets, and analyze their computational complexity. Finally, we suggest promising research directions to facilitate future study."
]
} |
1901.03900 | 2909951054 | We propose a genetic algorithm (GA) for hyperparameter optimization of artificial neural networks which includes chromosomal crossover as well as a decoupling of parameters (i.e., weights and biases) from hyperparameters (e.g., learning rate, weight decay, and dropout) during sexual reproduction. Children are produced from three parents; two contributing hyperparameters and one contributing the parameters. Our version of population-based training (PBT) combines traditional gradient-based approaches such as stochastic gradient descent (SGD) with our GA to optimize both parameters and hyperparameters across SGD epochs. Our improvements over traditional PBT provide an increased speed of adaptation and a greater ability to shed deleterious genes from the population. Our methods improve final accuracy as well as time to fixed accuracy on a wide range of deep neural network architectures including convolutional neural networks, recurrent neural networks, dense neural networks, and capsule networks. | Previous work addressing hyperparameter optimization in a high-performance computing setting spans a wide range of complexities and efficacies, from simple approaches like random search and grid search to more complicated approaches such as Hyperband @cite_13 and Livermore Tournament Fast Batch Learning (LTFB) @cite_26 . While some approaches, such as Multi-node Evolutionary Neural Networks for Deep Learning (MENNDL) @cite_23 @cite_0 , explicitly mention the use of genetic algorithms, others, such as population-based training (PBT) @cite_21 , are not expressly called out as such in the literature, even though they appear to be members of a generalized class of genetic algorithm. | {
"cite_N": [
"@cite_26",
"@cite_21",
"@cite_0",
"@cite_23",
"@cite_13"
],
"mid": [
"2765985261",
"2770298516",
"2250904038",
"2563787691",
"2304209433"
],
"abstract": [
"We propose a new framework for parallelizing deep neural network training that maximize the amount of data that is ingested by the training algorithm. Our proposed framework called Livermore Tournament Fast Batch Learning (LTFB) targets large-scale data problems. The LTFB approach creates a set of Deep Neural Network (DNN) models and trains each instance of these models independently and in parallel. Periodically each model selects another model to pair with, exchanges models, and then run a local tournament against held-out tournament datasets. The winning model is will continue training on the local training datasets. This new approach maximizes computation and minimizes amount of synchronization required in training deep neural network, a major bottleneck in existing synchronous deep learning algorithms. We evaluate our proposed algorithm on two HPC machines at Lawrence Livermore National Laboratory including an early access IBM Power8+ with NVIDIA Tesla P100 GPUs machine. Experimental evaluations of the LTFB framework on two popular image classification benchmark: CIFAR10 [18] and ImageNet [19], show significant speed up compared to the sequential baseline.",
"Neural networks dominate the modern machine learning landscape, but their training and success still suffer from sensitivity to empirical choices of hyperparameters such as model architecture, loss function, and optimisation algorithm. In this work we present , a simple asynchronous optimisation algorithm which effectively utilises a fixed computational budget to jointly optimise a population of models and their hyperparameters to maximise performance. Importantly, PBT discovers a schedule of hyperparameter settings rather than following the generally sub-optimal strategy of trying to find a single fixed set to use for the whole course of training. With just a small modification to a typical distributed hyperparameter training framework, our method allows robust and reliable training of models. We demonstrate the effectiveness of PBT on deep reinforcement learning problems, showing faster wall-clock convergence and higher final performance of agents by optimising over a suite of hyperparameters. In addition, we show the same method can be applied to supervised learning for machine translation, where PBT is used to maximise the BLEU score directly, and also to training of Generative Adversarial Networks to maximise the Inception score of generated images. In all cases PBT results in the automatic discovery of hyperparameter schedules and model selection which results in stable training and better final performance.",
"There has been a recent surge of success in utilizing Deep Learning (DL) in imaging and speech applications for its relatively automatic feature generation and, in particular for convolutional neural networks (CNNs), high accuracy classification abilities. While these models learn their parameters through data-driven methods, model selection (as architecture construction) through hyper-parameter choices remains a tedious and highly intuition driven task. To address this, Multi-node Evolutionary Neural Networks for Deep Learning (MENNDL) is proposed as a method for automating network selection on computational clusters through hyper-parameter optimization performed via genetic algorithms.",
"Current Deep Learning models use highly optimized convolutional neural networks (CNN) trained on large graphical processing units (GPU)-based computers with a fairly simple layered network topology, i.e., highly connected layers, without intra-layer connections. Complex topologies have been proposed, but are intractable to train on current systems. Building the topologies of the deep learning network requires hand tuning, and implementing the network in hardware is expensive in both cost and power.In this paper, we evaluate deep learning models using three different computing architectures to address these problems: quantum computing to train complex topologies, high performance computing (HPC) to automatically determine network topology, and neuromorphic computing for a low-power hardware implementation. Due to input size limitations of current quantum computers we use the MNIST dataset for our evaluation.The results show the possibility of using the three architectures in tandem to explore complex deep learning networks that are untrainable using a von Neumann architecture. We show that a quantum computer can find high quality values of intra-layer connections and weights, while yielding a tractable time result as the complexity of the network increases; a high performance computer can find optimal layer-based topologies; and a neuromorphic computer can represent the complex topology and weights derived from the other architectures in low power memristive hardware.This represents a new capability that is not feasible with current von Neumann architecture. It potentially enables the ability to solve very complicated problems unsolvable with current computing technologies.",
"Performance of machine learning algorithms depends critically on identifying a good set of hyperparameters. While current methods offer efficiencies by adaptively choosing new configurations to train, an alternative strategy is to adaptively allocate resources across the selected configurations. We formulate hyperparameter optimization as a pure-exploration non-stochastic infinitely many armed bandit problem where allocation of additional resources to an arm corresponds to training a configuration on larger subsets of the data. We introduce Hyperband for this framework and analyze its theoretical properties, providing several desirable guarantees. We compare Hyperband with state-of-the-art Bayesian optimization methods and a random search baseline on a comprehensive benchmark including 117 datasets. Our results on this benchmark demonstrate that while Bayesian optimization methods do not outperform random search trained for twice as long, Hyperband in favorable settings offers valuable speedups."
]
} |
1901.03900 | 2909951054 | We propose a genetic algorithm (GA) for hyperparameter optimization of artificial neural networks which includes chromosomal crossover as well as a decoupling of parameters (i.e., weights and biases) from hyperparameters (e.g., learning rate, weight decay, and dropout) during sexual reproduction. Children are produced from three parents; two contributing hyperparameters and one contributing the parameters. Our version of population-based training (PBT) combines traditional gradient-based approaches such as stochastic gradient descent (SGD) with our GA to optimize both parameters and hyperparameters across SGD epochs. Our improvements over traditional PBT provide an increased speed of adaptation and a greater ability to shed deleterious genes from the population. Our methods improve final accuracy as well as time to fixed accuracy on a wide range of deep neural network architectures including convolutional neural networks, recurrent neural networks, dense neural networks, and capsule networks. | This work first frames PBT as a genetic algorithm and then generalizes and extends it in Section . The original version of PBT can be thought of as a genetic algorithm with the following characteristics: overlapping generations, genes comprised of hyperparameters and parameters, GA generations running concurrently with SGD training epochs, asexual reproduction without crossover, two possibilities for viability selection (tournament and rank), and a fitness function customized for each of the NN objectives (e.g., using the BLEU score for neural machine translation @cite_22 ). Crossover (i.e., recombination), discussed in more detail in Section , combines hyperparameters from two parents to create a new child, as depicted in Figure . This provides not only a possibility for the aggregation of beneficial genes, but also helps shed deleterious mutations from the population. | {
"cite_N": [
"@cite_22"
],
"mid": [
"2101105183"
],
"abstract": [
"Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations."
]
} |
1901.03872 | 2908589198 | Direct design of a robot's rendered dynamics, such as in impedance control, is now a well-established control mode in uncertain environments. When the physical interaction port variables are not measured directly, dynamic and kinematic models are required to relate the measured variables to the interaction port variables. A typical example is serial manipulators with joint torque sensors, where the interaction occurs at the end-effector. As interactive robots perform increasingly complex tasks, they will be intermittently coupled with additional dynamic elements such as tools, grippers, or workpieces, some of which should be compensated and brought to the robot side of the interaction port, making the inverse dynamics multimodal. Furthermore, there may also be unavoidable and unmeasured external input when the desired system cannot be totally isolated. Towards semi-autonomous robots, capable of handling such applications, a multimodal Gaussian process regression approach to manipulator dynamic modelling is developed. A sampling-based approach clusters different dynamic modes from unlabelled data, also allowing the seperation of perturbed data with significant, irregular external input. The passivity of the overall approach is shown analytically, and experiments examine the performance and safety of this approach on a test actuator. | Adaptive control uses input and output data of the system to (in some variants) realize on-line identification of model parameters @cite_15 . However, adaptive techniques require a structured or parameterized model, making nonlinear friction and non-conventional dynamics difficult to compensate. | {
"cite_N": [
"@cite_15"
],
"mid": [
"2069913713"
],
"abstract": [
"This article presents a general methodology for the design of adaptive control systems which can learn to operate efficiently in dynamical environments possessing a high degree of uncertainty. Multiple models are used to describe the different environments and the control is effected by switching to an appropriate controller followed by tuning or adaptation. The study of linear systems provides the theoretical foundation for the approach and is described first. The manner in which such concepts can be extended to the control of nonlinear systems using neural networks is considered next. Towards the end of the article, the applications of the above methodology to practical robotic manipulator control is described. >"
]
} |
1901.03872 | 2908589198 | Direct design of a robot's rendered dynamics, such as in impedance control, is now a well-established control mode in uncertain environments. When the physical interaction port variables are not measured directly, dynamic and kinematic models are required to relate the measured variables to the interaction port variables. A typical example is serial manipulators with joint torque sensors, where the interaction occurs at the end-effector. As interactive robots perform increasingly complex tasks, they will be intermittently coupled with additional dynamic elements such as tools, grippers, or workpieces, some of which should be compensated and brought to the robot side of the interaction port, making the inverse dynamics multimodal. Furthermore, there may also be unavoidable and unmeasured external input when the desired system cannot be totally isolated. Towards semi-autonomous robots, capable of handling such applications, a multimodal Gaussian process regression approach to manipulator dynamic modelling is developed. A sampling-based approach clusters different dynamic modes from unlabelled data, also allowing the seperation of perturbed data with significant, irregular external input. The passivity of the overall approach is shown analytically, and experiments examine the performance and safety of this approach on a test actuator. | Nonparametric modeling techniques have found application to inverse dynamics learning @cite_11 . These models can generalize from historical data to new trajectories, although the extent to which this is practically achieved is both application and parameter dependent. Being constructed from historical data, they can capture more difficult non-linear effects and don't require a priori knowledge of model structure. | {
"cite_N": [
"@cite_11"
],
"mid": [
"2097815751"
],
"abstract": [
"Humans demonstrate a remarkable ability to generate accurate and appropriate motor behavior under many different and often uncertain environmental conditions. We previously proposed a new modular architecture, the modular selection and identification for control (MOSAIC) model, for motor learning and control based on multiple pairs of forward (predictor) and inverse (controller) models. The architecture simultaneously learns the multiple inverse models necessary for control as well as how to select the set of inverse models appropriate for a given environment. It combines both feedforward and feedback sensorimotor information so that the controllers can be selected both prior to movement and subsequently during movement. This article extends and evaluates the MOSAIC architecture in the following respects. The learning in the architecture was implemented by both the original gradient-descent method and the expectation-maximization (EM) algorithm. Unlike gradient descent, the newly derived EM algorithm is robust to the initial starting conditions and learning parameters. Second, simulations of an object manipulation task prove that the architecture can learn to manipulate multiple objects and switch between them appropriately. Moreover, after learning, the model shows generalization to novel objects whose dynamics lie within the polyhedra of already learned dynamics. Finally, when each of the dynamics is associated with a particular object shape, the model is able to select the appropriate controller before movement execution. When presented with a novel shape-dynamic pairing, inappropriate activation of modules is observed followed by on-line correction."
]
} |
1901.03872 | 2908589198 | Direct design of a robot's rendered dynamics, such as in impedance control, is now a well-established control mode in uncertain environments. When the physical interaction port variables are not measured directly, dynamic and kinematic models are required to relate the measured variables to the interaction port variables. A typical example is serial manipulators with joint torque sensors, where the interaction occurs at the end-effector. As interactive robots perform increasingly complex tasks, they will be intermittently coupled with additional dynamic elements such as tools, grippers, or workpieces, some of which should be compensated and brought to the robot side of the interaction port, making the inverse dynamics multimodal. Furthermore, there may also be unavoidable and unmeasured external input when the desired system cannot be totally isolated. Towards semi-autonomous robots, capable of handling such applications, a multimodal Gaussian process regression approach to manipulator dynamic modelling is developed. A sampling-based approach clusters different dynamic modes from unlabelled data, also allowing the seperation of perturbed data with significant, irregular external input. The passivity of the overall approach is shown analytically, and experiments examine the performance and safety of this approach on a test actuator. | Several authors have investigated inverse dynamic modeling for changing payload mass @cite_21 @cite_2 , one of the most common ways interactive robots become multimodal, but here considerations of modeling and interactive performance are taken into account (modeling limitations, passivity, external input), and validation is in interactive performance, not trajectory tracking. There is also some prior work on classification of external perturbations, such as under cyclic system motion @cite_3 or in collision @cite_20 . Here, perturbation is not identified with an existing model, but co-occurs during identification, and is separated from the underlying model within the proposed framework. | {
"cite_N": [
"@cite_21",
"@cite_20",
"@cite_3",
"@cite_2"
],
"mid": [
"2079868754",
"1601495401",
"2095146110",
""
],
"abstract": [
"Accurate dynamic models can be very difficult to compute analytically for complex robots; moreover, using a precomputed fixed model does not allow to cope with unexpected changes in the system. An interesting alternative solution is to learn such models from data, and keep them up-to-date through online adaptation. In this paper we consider the problem of learning the robot inverse dynamic model under dynamically varying contexts: the robot learns incrementally and autonomously the model under different conditions, represented by the manipulation of objects of different weights, that change the dynamics of the system. The inverse dynamic mapping is modeled as a multi-valued function, in which different outputs for the same input query are related to different dynamic contexts (i.e. different manipulated objects). The mapping is estimated using IMLE, a recent online learning algorithm for multi-valued regression, and used for Computed Torque control. No information is given about the context switch during either learning or control, nor any assumption is made about the kind of variation in the dynamics imposed by a new contexts. Experimental results with the iCub humanoid robot are provided.",
"Detecting and interpreting contacts is a crucial aspect of physical Human-Robot Interaction. In order to discriminate between intended and unintended contact types, we derive a set of linear and non-linear features based on physical contact model insights and from observing real impact data that may even rely on proprioceptive sensation only. We implement a classification system with a standard non-linear Support Vector Machine and show empirically both in simulations and on a real robot the high accuracy in off- as well as on-line settings of the system. We argue that these successful results are based on our feature design derived from first principles.",
"In many settings, e.g. physical human-robot interaction, robotic behavior must be made robust against more or less spontaneous application of external forces. Typically, this problem is tackled by means of special purpose force sensors which are, however, not available on many robotic platforms. In contrast, we propose a machine learning approach suitable for more common, although often noisy sensors. This machine learning approach makes use of Dynamic Mode Decomposition (DMD) which is able to extract the dynamics of a nonlinear system. It is therefore well suited to separate noise from regular oscillations in sensor readings during cyclic robot movements under different behavior configurations. We demonstrate the feasibility of our approach with an example where physical forces are exerted on a humanoid robot during walking. In a training phase, a snapshot based DMD model for behavior specific parameter configurations is learned. During task execution the robot must detect and estimate the external forces exerted by a human interaction partner. We compare the DMD-based approach to other interpolation schemes and show that the former outperforms the latter particularly in the presence of sensor noise. We conclude that DMD which has so far been mostly used in other fields of science, particularly fluid mechanics, is also a highly promising method for robotics.",
""
]
} |
1901.03895 | 2910901898 | We present a novel predictive model architecture based on the principles of predictive coding that enables open loop prediction of future observations over extended horizons. There are two key innovations. First, whereas current methods typically learn to make long-horizon open-loop predictions using a multi-step cost function, we instead run the model open loop in the forward pass during training. Second, current predictive coding models initialize the representation layer's hidden state to a constant value at the start of an episode, and consequently typically require multiple steps of interaction with the environment before the model begins to produce accurate predictions. Instead, we learn a mapping from the first observation in an episode to the hidden state, allowing the trained model to immediately produce accurate predictions. We compare the performance of our architecture to a standard predictive coding model and demonstrate the ability of the model to make accurate long horizon open-loop predictions of simulated Doppler radar altimeter readings during a six degree of freedom Mars landing. Finally, we demonstrate a 2X reduction in sample complexity by using the model to implement a Dyna style algorithm to accelerate policy learning with proximal policy optimization. | Recent work in developing predictive models includes @cite_14 , where a model learns to predict future video frames by observing sequences of observations and actions. The model is then used to generate robotic trajectories using model predictive control (MPC), choosing the trajectory that ends with an image that best matches a user specified goal image. The action conditional architecture of @cite_7 has proven successful in open loop prediction of long sequences of rendered frames from simulated Atari games. Predictive coding @cite_3 has been applied to predicting images from objects that are sequentially rotated, and has been used to predict steering angles from video frames captured from a car mounted camera. @cite_9 extends the PredNet architecture described in @cite_3 . | {
"cite_N": [
"@cite_9",
"@cite_14",
"@cite_3",
"@cite_7"
],
"mid": [
"2768975186",
"2953317238",
"2401640538",
""
],
"abstract": [
"The predictive learning of spatiotemporal sequences aims to generate future images by learning from the historical frames, where spatial appearances and temporal variations are two crucial structures. This paper models these structures by presenting a predictive recurrent neural network (PredRNN). This architecture is enlightened by the idea that spatiotemporal predictive learning should memorize both spatial appearances and temporal variations in a unified memory pool. Concretely, memory states are no longer constrained inside each LSTM unit. Instead, they are allowed to zigzag in two directions: across stacked RNN layers vertically and through all RNN states horizontally. The core of this network is a new Spatiotemporal LSTM (ST-LSTM) unit that extracts and memorizes spatial and temporal representations simultaneously. PredRNN achieves the state-of-the-art prediction performance on three video prediction datasets and is a more general framework, that can be easily extended to other predictive learning tasks by integrating with other architectures.",
"A key challenge in scaling up robot learning to many skills and environments is removing the need for human supervision, so that robots can collect their own data and improve their own performance without being limited by the cost of requesting human feedback. Model-based reinforcement learning holds the promise of enabling an agent to learn to predict the effects of its actions, which could provide flexible predictive models for a wide range of tasks and environments, without detailed human supervision. We develop a method for combining deep action-conditioned video prediction models with model-predictive control that uses entirely unlabeled training data. Our approach does not require a calibrated camera, an instrumented training set-up, nor precise sensing and actuation. Our results show that our method enables a real robot to perform nonprehensile manipulation -- pushing objects -- and can handle novel objects not seen during training.",
"While great strides have been made in using deep learning algorithms to solve supervised learning tasks, the problem of unsupervised learning - leveraging unlabeled examples to learn about the structure of a domain - remains a difficult unsolved challenge. Here, we explore prediction of future frames in a video sequence as an unsupervised learning rule for learning about the structure of the visual world. We describe a predictive neural network (\"PredNet\") architecture that is inspired by the concept of \"predictive coding\" from the neuroscience literature. These networks learn to predict future frames in a video sequence, with each layer in the network making local predictions and only forwarding deviations from those predictions to subsequent network layers. We show that these networks are able to robustly learn to predict the movement of synthetic (rendered) objects, and that in doing so, the networks learn internal representations that are useful for decoding latent object parameters (e.g. pose) that support object recognition with fewer training views. We also show that these networks can scale to complex natural image streams (car-mounted camera videos), capturing key aspects of both egocentric movement and the movement of objects in the visual scene, and the representation learned in this setting is useful for estimating the steering angle. Altogether, these results suggest that prediction represents a powerful framework for unsupervised learning, allowing for implicit learning of object and scene structure.",
""
]
} |
1901.03895 | 2910901898 | We present a novel predictive model architecture based on the principles of predictive coding that enables open loop prediction of future observations over extended horizons. There are two key innovations. First, whereas current methods typically learn to make long-horizon open-loop predictions using a multi-step cost function, we instead run the model open loop in the forward pass during training. Second, current predictive coding models initialize the representation layer's hidden state to a constant value at the start of an episode, and consequently typically require multiple steps of interaction with the environment before the model begins to produce accurate predictions. Instead, we learn a mapping from the first observation in an episode to the hidden state, allowing the trained model to immediately produce accurate predictions. We compare the performance of our architecture to a standard predictive coding model and demonstrate the ability of the model to make accurate long horizon open-loop predictions of simulated Doppler radar altimeter readings during a six degree of freedom Mars landing. Finally, we demonstrate a 2X reduction in sample complexity by using the model to implement a Dyna style algorithm to accelerate policy learning with proximal policy optimization. | As for using predictive models to accelerate reinforcement learning, in @cite_1 , the authors use an action conditional model to predict the future states of agents operating in various openAI gym environments with high dimensional state spaces, and use the model in a model predictive control algorithm that quickly learns the tasks at a relatively low level of performance. The model is then used to create a dataset of trajectories to pre-train a TRPO policy @cite_11 , which then achieves a high level of performance on the task through continued model-free policy optimization. The authors report a 3 to 5 times reduction in sample efficiency using the combined model free and model based algorithm. The approach assumes that a ground truth reward function is known. | {
"cite_N": [
"@cite_1",
"@cite_11"
],
"mid": [
"2743381431",
"2949608212"
],
"abstract": [
"Model-free deep reinforcement learning algorithms have been shown to be capable of learning a wide range of robotic skills, but typically require a very large number of samples to achieve good performance. Model-based algorithms, in principle, can provide for much more efficient learning, but have proven difficult to extend to expressive, high-capacity models such as deep neural networks. In this work, we demonstrate that medium-sized neural network models can in fact be combined with model predictive control (MPC) to achieve excellent sample complexity in a model-based reinforcement learning algorithm, producing stable and plausible gaits to accomplish various complex locomotion tasks. We also propose using deep neural network dynamics models to initialize a model-free learner, in order to combine the sample efficiency of model-based approaches with the high task-specific performance of model-free methods. We empirically demonstrate on MuJoCo locomotion tasks that our pure model-based approach trained on just random action data can follow arbitrary trajectories with excellent sample efficiency, and that our hybrid algorithm can accelerate model-free learning on high-speed benchmark tasks, achieving sample efficiency gains of 3-5x on swimmer, cheetah, hopper, and ant agents. Videos can be found at this https URL",
"We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters."
]
} |
1901.03895 | 2910901898 | We present a novel predictive model architecture based on the principles of predictive coding that enables open loop prediction of future observations over extended horizons. There are two key innovations. First, whereas current methods typically learn to make long-horizon open-loop predictions using a multi-step cost function, we instead run the model open loop in the forward pass during training. Second, current predictive coding models initialize the representation layer's hidden state to a constant value at the start of an episode, and consequently typically require multiple steps of interaction with the environment before the model begins to produce accurate predictions. Instead, we learn a mapping from the first observation in an episode to the hidden state, allowing the trained model to immediately produce accurate predictions. We compare the performance of our architecture to a standard predictive coding model and demonstrate the ability of the model to make accurate long horizon open-loop predictions of simulated Doppler radar altimeter readings during a six degree of freedom Mars landing. Finally, we demonstrate a 2X reduction in sample complexity by using the model to implement a Dyna style algorithm to accelerate policy learning with proximal policy optimization. | @cite_2 accelerates learning a state-action value function by using a neural network based model to generate synthetic rollouts up to a horizon of H steps. The rewards beyond H steps are replaced by the current estimate of the value of the penultimate observation in the synthetic rollouts, as in an H-step temporal difference estimate. The accelerated Q function learning is then used to improve the sample efficiency of the DDPG algorithm @cite_4 . Again, the algorithm assumes that the ground-truth reward function is available. | {
"cite_N": [
"@cite_4",
"@cite_2"
],
"mid": [
"2173248099",
"2789824229"
],
"abstract": [
"We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.",
"Recent model-free reinforcement learning algorithms have proposed incorporating learned dynamics models as a source of additional data with the intention of reducing sample complexity. Such methods hold the promise of incorporating imagined data coupled with a notion of model uncertainty to accelerate the learning of continuous control tasks. Unfortunately, they rely on heuristics that limit usage of the dynamics model. We present model-based value expansion, which controls for uncertainty in the model by only allowing imagination to fixed depth. By enabling wider use of learned dynamics models within a model-free reinforcement learning algorithm, we improve value estimation, which, in turn, reduces the sample complexity of learning."
]
} |
1901.04016 | 2908711039 | Our middleware approach, Context-Oriented Software Middleware (COSM), supports context-dependent software with self-adaptability and dependability in a mobile computing environment. The COSM-middleware is a generic and platform-independent adaptation engine, which performs a runtime composition of the software's context-dependent behaviours based on the execution contexts. Our middleware distinguishes between the context-dependent and context-independent functionality of software systems. This enables the COSM-middleware to adapt the application behaviour by composing a set of context-oriented components, that implement the context-dependent functionality of the software. Accordingly, the software dependability is achieved by considering the functionality of the COSM-middleware and the adaptation impact costs. The COSM-middleware uses a dynamic policy-based engine to evaluate the adaptation outputs and verify the fitness of the adaptation output with the application's objectives, goals and the architecture quality attributes. These capabilities are demonstrated through an empirical evaluation of a case study implementation. | For a more complex context-aware system, the same context information would be used in different parts of an application and would trigger the invocation of additional behaviours. In this way, context handling becomes a concern that spans several application units, essentially crosscutting into the main application execution. A programming paradigm aiming at handling such crosscutting concerns (referred to as aspects) is AOP @cite_4 . DAOP has emerged to enforce separation of concerns and support runtime adaptations through weaving code blocks in the application execution @cite_17 . The assumptions made by the COP and AOP approaches, i.e. that the developer knows all the possible software adaptations in advance and designs the application accordingly, are not sufficient to fulfil this need. In addition, in COP and AOP @cite_26 , the context model and the adaptation logic are explicitly hard-coded in the application's business code @cite_11 ; this often leads to poor scalability and maintainability @cite_10 . In contrast, COSD separates the context model and the adaptation logic from the application code, which provides the software with the ability to adapt different context models at runtime without maintaining or modifying the application's business code @cite_25 . | {
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_17",
"@cite_10",
"@cite_25",
"@cite_11"
],
"mid": [
"2161291379",
"",
"1970558557",
"2109097634",
"2091287679",
"1978875190"
],
"abstract": [
"Context-aware applications behave differently depending on the context in which they are running. Since context-specific behavior tends to crosscut base programs, it can advantageously be implemented as aspects. This leads to the notion of context-aware aspects, i.e., aspects whose behavior depends on context. This paper analyzes the issue of appropriate support from the aspect language to both restrict the scope of aspects according to the context and allow aspect definitions to access information associated to the context. We propose an open framework for context-aware aspects that allows for the definition of first-class contexts and supports the definition of context awareness constructs for aspects, including the ability to refer to past contexts, and to provide domain- and application-specific constructs.",
"",
"When using Aspect Oriented Programming in the development of software components, a developer must understand the program units actually changed by weaving, how they behave, and possibly correct the aspects used. Support for rapid AOP prototyping and debugging is therefore crucial in such situations. Rapid prototyping is difficult with current aspect weaving tools because they do not support dynamic changes. This paper describes PROSE (PROgrammable extenSions of sErvices), a platform based on Java which addresses dynamic AOP. Aspects are expressed in the same source language as the application (Java), and PROSE allows aspects to be woven, unwoven, or replaced at run-time.",
"Context constitutes an essential part of service behaviour, especially when interaction with end-users is involved. As observed from the literature, context handling in service engineering has been during recent years a field of intense research, which has produced several interesting approaches. In this paper, we present research efforts that attempt mainly to decouple context handling from the service logic. We enumerate all context management categories, but focus on the most appropriate for service engineering, namely source code level, model-driven and message interception, taking also into account the fact that these have not been dealt with in detail in other surveys. A representative example is used to illustrate more precisely how these approaches can be used. Finally, all three categories are compared based on a number of criteria.",
"Context-oriented programming is an emerging technique that enables dynamic behaviour variation based on context changes. In COP, context can be handled directly at the code level by enriching the business logic of the application with code fragments responsible for performing context manipulation, thus providing the application code with the required adaptive behavior. Unfortunately, the whole set of sensors, effectors, and adaptation processes is mixed with the application code, which often leads to poor scalability and maintainability. In addition, the developers have to surround all probable behavior inside the source code. As an outcome, the anticipated adjustment is restricted to the amount of code stubs on hand offered by the creators. Context-driven adaptation requires dynamic composition of context-dependent parts. This can be achieved trough the support of a component model that encapsulates the context-dependent functionality and decouples them from the application's core-functionality. The complexity behind modeling the context-dependent functionality lies in the fact that they can occur separately or in any combination, and cannot be encapsulated because of their impact across all the software modules. Before encapsulating crosscutting context-dependent functionality into a software module, the developers must first identify them in the requirements documents. This requires a formal development paradigm for analyzing the context-dependent functionality; and a component model, which modularizes their concerns. COCA-MDA is proposed in this article as model driven architecture for constructing self-adaptive application from a context oriented component model. Index Terms— Adaptable middleware, Context oriented component, Self-adaptive application, Object.",
"Context-oriented programming (COP) provides dedicated support for defining and composing variations to a basic program behavior. A variation, which is defined within a layer, can be de- activated for the dynamic extent of a code block. While this mechanism allows for control flow-specific scoping, expressing behavior adaptations can demand alternative scopes. For instance, adaptations can depend on dynamic object structure rather than control flow. We present scenarios for behavior adaptation and identify the need for new scoping mechanisms. The increasing number of scoping mechanisms calls for new language abstractions representing them. We suggest to open the implementation of scoping mechanisms so that developers can extend the COP language core according to their specific needs. Our open implementation moves layer composition into objects to be affected and with that closer to the method dispatch to be changed. We discuss the implementation of established COP scoping mechanisms using our approach and present new scoping mechanisms developed for our enhancements to Lively Kernel."
]
} |
1907.05628 | 2957359913 | Disease-gene prediction (DGP) refers to the computational challenge of predicting associations between genes and diseases. Effective solutions to the DGP problem have the potential to accelerate the therapeutic development pipeline at early stages via efficient prioritization of candidate genes for various diseases. In this work, we introduce the variational graph auto-encoder (VGAE) as a promising unsupervised approach for learning powerful latent embeddings in disease-gene networks that can be used for the DGP problem, the first approach using a generative model involving graph neural networks (GNNs). In addition to introducing the VGAE as a promising approach to the DGP problem, we further propose an extension (constrained-VGAE or C-VGAE) which adapts the learning algorithm for link prediction between two distinct node types in heterogeneous graphs. We evaluate and demonstrate the effectiveness of the VGAE on general link prediction in a disease-gene association network and the C-VGAE on disease-gene prediction in the same network, using popular random walk driven methods as baselines. While the methodology presented demonstrates potential solely based on utilizing the topology of a disease-gene association network, it can be further enhanced and explored through the integration of additional biological networks such as gene protein interaction networks and additional biological features pertaining to the diseases and genes represented in the disease-gene association network. | The foundation of linkage methods is built on the combination of candidate gene identification through network community analysis, knowledge of chromosomal locations of the candidate genes, and knowledge of certain disease loci (regions of the genome that are likely to be associated with a particular disease). In @cite_14 , candidate genes are identified through the analysis of a protein interaction network. Candidate genes are characterized by having an interaction with another gene (via corresponding proteins) known to have an association with a particular disease. From this perspective, a gene can qualify as a candidate if and only if it interacts with another gene that is known to have an association with the disease of interest, inherently limiting the scope of the search for candidate genes. | {
"cite_N": [
"@cite_14"
],
"mid": [
"2020301095"
],
"abstract": [
"Background: The responsible genes have not yet been identified for many genetically mapped disease loci. Physically interacting proteins tend to be involved in the same cellular process, and mutations in their genes may lead to similar disease phenotypes. Objective: To investigate whether protein–protein interactions can predict genes for genetically heterogeneous diseases. Methods: 72 940 protein–protein interactions between 10 894 human proteins were used to search 432 loci for candidate disease genes representing 383 genetically heterogeneous hereditary diseases. For each disease, the protein interaction partners of its known causative genes were compared with the disease associated loci lacking identified causative genes. Interaction partners located within such loci were considered candidate disease gene predictions. Prediction accuracy was tested using a benchmark set of known disease genes. Results: Almost 300 candidate disease gene predictions were made. Some of these have since been confirmed. On average, 10 or more are expected to be genuine disease genes, representing a 10-fold enrichment compared with positional information only. Examples of interesting candidates are AKAP6 for arrythmogenic right ventricular dysplasia 3 and SYN3 for familial partial epilepsy with variable foci. Conclusions: Exploiting protein–protein interactions can greatly increase the likelihood of finding positional candidate disease genes. When applied on a large scale they can lead to novel candidate gene predictions."
]
} |
1907.05628 | 2957359913 | Disease-gene prediction (DGP) refers to the computational challenge of predicting associations between genes and diseases. Effective solutions to the DGP problem have the potential to accelerate the therapeutic development pipeline at early stages via efficient prioritization of candidate genes for various diseases. In this work, we introduce the variational graph auto-encoder (VGAE) as a promising unsupervised approach for learning powerful latent embeddings in disease-gene networks that can be used for the DGP problem, the first approach using a generative model involving graph neural networks (GNNs). In addition to introducing the VGAE as a promising approach to the DGP problem, we further propose an extension (constrained-VGAE or C-VGAE) which adapts the learning algorithm for link prediction between two distinct node types in heterogeneous graphs. We evaluate and demonstrate the effectiveness of the VGAE on general link prediction in a disease-gene association network and the C-VGAE on disease-gene prediction in the same network, using popular random walk driven methods as baselines. While the methodology presented demonstrates potential solely based on utilizing the topology of a disease-gene association network, it can be further enhanced and explored through the integration of additional biological networks such as gene protein interaction networks and additional biological features pertaining to the diseases and genes represented in the disease-gene association network. | Module-based methods are built on the concept that biological networks can be divided into modules or neighborhoods, which are roughly characterized by topological proximity. Candidate genes for a particular disease are assumed to belong to the same module as other genes that are known to be linked to that same disease. From the vantage point of pure network analysis, such a task can be framed as a community detection problem; however, empirical analysis shows that genes/proteins that are associated with a particular disease do not always form dense subgraphs, although they may signify areas of functional similarity @cite_7 . | {
"cite_N": [
"@cite_7"
],
"mid": [
"2110553827"
],
"abstract": [
"The observation that disease associated proteins often interact with each other has fueled the development of network-based approaches to elucidate the molecular mechanisms of human disease. Such approaches build on the assumption that protein interaction networks can be viewed as maps in which diseases can be identified with localized perturbation within a certain neighborhood. The identification of these neighborhoods, or disease modules, is therefore a prerequisite of a detailed investigation of a particular pathophenotype. While numerous heuristic methods exist that successfully pinpoint disease associated modules, the basic underlying connectivity patterns remain largely unexplored. In this work we aim to fill this gap by analyzing the network properties of a comprehensive corpus of 70 complex diseases. We find that disease associated proteins do not reside within locally dense communities and instead identify connectivity significance as the most predictive quantity. This quantity inspires the design of a novel Disease Module Detection (DIAMOnD) algorithm to identify the full disease module around a set of known disease proteins. We study the performance of the algorithm using well-controlled synthetic data and systematically validate the identified neighborhoods for a large corpus of diseases."
]
} |
1907.05628 | 2957359913 | Disease-gene prediction (DGP) refers to the computational challenge of predicting associations between genes and diseases. Effective solutions to the DGP problem have the potential to accelerate the therapeutic development pipeline at early stages via efficient prioritization of candidate genes for various diseases. In this work, we introduce the variational graph auto-encoder (VGAE) as a promising unsupervised approach for learning powerful latent embeddings in disease-gene networks that can be used for the DGP problem, the first approach using a generative model involving graph neural networks (GNNs). In addition to introducing the VGAE as a promising approach to the DGP problem, we further propose an extension (constrained-VGAE or C-VGAE) which adapts the learning algorithm for link prediction between two distinct node types in heterogeneous graphs. We evaluate and demonstrate the effectiveness of the VGAE on general link prediction in a disease-gene association network and the C-VGAE on disease-gene prediction in the same network, using popular random walk driven methods as baselines. While the methodology presented demonstrates potential solely based on utilizing the topology of a disease-gene association network, it can be further enhanced and explored through the integration of additional biological networks such as gene protein interaction networks and additional biological features pertaining to the diseases and genes represented in the disease-gene association network. | Diffusion methods start with a set of seed genes which are known to be associated with a disease; however, rather than computing a connectivity significance for each candidate gene in the network, random walk methods are used to find novel candidate genes @cite_4 @cite_17 @cite_24 @cite_18 @cite_9 . | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_9",
"@cite_24",
"@cite_17"
],
"mid": [
"2100582962",
"2038122249",
"1981949323",
"2097498614",
"2119220491"
],
"abstract": [
"Motivation: Clinical diseases are characterized by distinct phenotypes. To identify disease genes is to elucidate the gene–phenotype relationships. Mutations in functionally related genes may result in similar phenotypes. It is reasonable to predict disease-causing genes by integrating phenotypic data and genomic data. Some genetic diseases are genetically or phenotypically similar. They may share the common pathogenetic mechanisms. Identifying the relationship between diseases will facilitate better understanding of the pathogenetic mechanism of diseases. Results: In this article, we constructed a heterogeneous network by connecting the gene network and phenotype network using the phenotype–gene relationship information from the OMIM database. We extended the random walk with restart algorithm to the heterogeneous network. The algorithm prioritizes the genes and phenotypes simultaneously. We use leave-one-out cross-validation to evaluate the ability of finding the gene–phenotype relationship. Results showed improved performance than previous works. We also used the algorithm to disclose hidden disease associations that cannot be found by gene network or phenotype network alone. We identified 18 hidden disease associations, most of which were supported by literature evidence. Availability: The MATLAB code of the program is available at http: www3.ntu.edu.sg home aspatra research Yongjin_BI2010.zip Contact: yongjin.li@gmail.com Supplementary information:Supplementary data are available at Bioinformatics online.",
"A fundamental challenge in human health is the identification of disease-causing genes. Recently, several studies have tackled this challenge via a network-based approach, motivated by the observation that genes causing the same or similar diseases tend to lie close to one another in a network of protein-protein or functional interactions. However, most of these approaches use only local network information in the inference process and are restricted to inferring single gene associations. Here, we provide a global, network-based method for prioritizing disease genes and inferring protein complex associations, which we call PRINCE. The method is based on formulating constraints on the prioritization function that relate to its smoothness over the network and usage of prior information. We exploit this function to predict not only genes but also protein complex associations with a disease of interest. We test our method on gene-disease association data, evaluating both the prioritization achieved and the protein complexes inferred. We show that our method outperforms extant approaches in both tasks. Using data on 1,369 diseases from the OMIM knowledgebase, our method is able (in a cross validation setting) to rank the true causal gene first for 34 of the diseases, and infer 139 disease-related complexes that are highly coherent in terms of the function, expression and conservation of their member proteins. Importantly, we apply our method to study three multi-factorial diseases for which some causal genes have been found already: prostate cancer, alzheimer and type 2 diabetes mellitus. PRINCE's predictions for these diseases highly match the known literature, suggesting several novel causal genes and protein complexes for further investigation.",
"Reductionism, as a paradigm, is expired, and complexity, as a field, is tired. Data-based mathematical models of complex systems are offering a fresh perspective, rapidly developing into a new discipline: network science.",
"Deciphering the genetic basis of human diseases is an important goal of biomedical research. On the basis of the assumption that phenotypically similar diseases are caused by functionally related genes, we propose a computational framework that integrates human protein–protein interactions, disease phenotype similarities, and known gene–phenotype associations to capture the complex relationships between phenotypes and genotypes. We develop a tool named CIPHER to predict and prioritize disease genes, and we show that the global concordance between the human protein network and the phenotype network reliably predicts disease genes. Our method is applicable to genetically uncharacterized phenotypes, effective in the genome-wide scan of disease genes, and also extendable to explore gene cooperativity in complex diseases. The predicted genetic landscape of over 1000 human phenotypes, which reveals the global modular organization of phenotype–genotype relationships. The genome-wide prioritization of candidate genes for over 5000 human phenotypes, including those with under-characterized disease loci or even those lacking known association, is publicly released to facilitate future discovery of disease genes.",
"The identification of genes associated with hereditary disorders has contributed to improving medical care and to a better understanding of gene functions, interactions, and pathways. However, there are well over 1500 Mendelian disorders whose molecular basis remains unknown. At present, methods such as linkage analysis can identify the chromosomal region in which unknown disease genes are located, but the regions could contain up to hundreds of candidate genes. In this work, we present a method for prioritization of candidate genes by use of a global network distance measure, random walk analysis, for definition of similarities in protein-protein interaction networks. We tested our method on 110 disease-gene families with a total of 783 genes and achieved an area under the ROC curve of up to 98 on simulated linkage intervals of 100 genes surrounding the disease gene, significantly outperforming previous methods based on local distance measures. Our results not only provide an improved tool for positional-cloning projects but also add weight to the assumption that phenotypically similar diseases are associated with disturbances of subnetworks within the larger protein interactome that extend beyond the disease proteins themselves."
]
} |
1907.05538 | 2960971247 | We present a novel framework for collaboration amongst a team of robots performing Pose Graph Optimization (PGO) that addresses two important challenges for multi-robot SLAM: that of enabling information exchange "on-demand" via active rendezvous, and that of rejecting outlier measurements with high probability. Our key insight is to exploit relative position data present in the communication channel between agents, as an independent measurement in PGO. We show that our algorithmic and experimental framework for integrating Channel State Information (CSI) over the communication channels, with multi-agent PGO, addresses the two open challenges of enabling information exchange and rejecting outliers. Our presented framework is distributed and applicable in low-lighting or featureless environments where traditional sensors often fail. We present extensive experimental results on actual robots showing both the use of active rendezvous resulting in error reduction by 6X as compared to randomly occurring rendezvous and the use of CSI observations providing a reduction in ground truth pose estimation errors of 32 . These results demonstrate the promise of using a combination of multi-robot coordination and CSI to address challenges in multi-agent localization and mapping -- providing an important step towards integrating communication as a novel sensor for SLAM tasks. | As noted in @cite_19 , Pose Graph Optimization (PGO) is one of the most common modern techniques for SLAM, and supports many crucial tasks in robotics. A variety of techniques exist to extend PGO methods to multi-robot mapping problems. The authors of @cite_17 address the limited communication challenge by exchanging semantic information between agents to avoid exchanging raw sensor data. In @cite_6 , convex optimization for graph sparsification is used to address the challenges of acoustic communication channels. Techniques for reducing the number of poses considered are leveraged in @cite_5 @cite_2 @cite_1 to reduce bandwidth requirements. However, these techniques still suffer from outlier observations and the inability to coordinate higher rates of information exchange on demand. | {
"cite_N": [
"@cite_1",
"@cite_6",
"@cite_19",
"@cite_2",
"@cite_5",
"@cite_17"
],
"mid": [
"",
"1603426217",
"2461937780",
"2091750321",
"2102233102",
"2593712938"
],
"abstract": [
"",
"Multi-robot deployments have the potential for completing tasks more efficiently. For example, in simultaneous localization and mapping (SLAM), robots can better localize themselves and the map if they can share measurements of each other (direct encounters) and of commonly observed parts of the map (indirect encounters). However, performance is contingent on the quality of the communications channel. In the underwater scenario, communicating over any appreciable distance is achieved using acoustics which is low-bandwidth, slow, and unreliable, making cooperative operations very challenging. In this paper, we present a framework for cooperative SLAM (C-SLAM) for multiple autonomous underwater vehicles (AUVs) communicating only through acoustics. We develop a novel graph-based C-SLAM algorithm that is able to (optimally) generate communication packets whose size scales linearly with the number of observed features since the last successful transmission, constantly with the number of vehicles in the collective, and does not grow with time even the case of dropped packets, which are common. As a result, AUVs can bound their localization error without the need for pre-installed beacons or surfacing for GPS fixes during navigation, leading to significant reduction in time required to complete missions. The proposed algorithm is validated through realistic marine vehicle and acoustic communication simulations.",
"Simultaneous localization and mapping (SLAM) consists in the concurrent construction of a model of the environment (the map ), and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM and consider future directions. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and tutorial to those who are users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues, that still deserve careful scientific investigation. The paper also contains the authors’ take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? and Is SLAM solved?",
"In graph-based simultaneous localization and mapping (SLAM), the pose graph grows over time as the robot gathers information about the environment. An ever growing pose graph, however, prevents long-term mapping with mobile robots. In this paper, we address the problem of efficient information-theoretic compression of pose graphs. Our approach estimates the mutual information between the laser measurements and the map to discard the measurements that are expected to provide only a small amount of information. Our method subsequently marginalizes out the nodes from the pose graph that correspond to the discarded laser measurements. To maintain a sparse pose graph that allows for efficient map optimization, our approach applies an approximate marginalization technique that is based on Chow-Liu trees. Our contributions allow the robot to effectively restrict the size of the pose graph. Alternatively, the robot is able to maintain a pose graph that does not grow unless the robot explores previously unobserved parts of the environment. Real-world experiments demonstrate that our approach to pose graph compression is well suited for long-term mobile robot mapping.",
"In this paper we focus on the multi-robot perception problem, and present an experimentally validated end-to-end multi-robot mapping framework, enabling individual robots in a team to see beyond their individual sensor horizons. The inference part of our system is the DDF-SAM algorithm [1], which provides a decentralized communication and inference scheme, but did not address the crucial issue of data association. One key contribution is a novel, RANSAC-based, approach for performing the between-robot data associations and initialization of relative frames of reference. We demonstrate this system with both data collected from real robot experiments, as well as in a large scale simulated experiment demonstrating the scalability of the proposed approach.",
"We consider the following problem: a team of robots is deployed in an unknown environment and it has to collaboratively build a map of the area without a reliable infrastructure for communication. The backbone for modern mapping techniques is pose graph optimization, which estimates the trajectory of the robots, from which the map can be easily built. The first contribution of this paper is a set of distributed algorithms for pose graph optimization: rather than sending all sensor data to a remote sensor fusion server, the robots exchange very partial and noisy information to reach an agreement on the pose graph configuration. Our approach can be considered as a distributed implementation of a two-stage approach that already exists, where we use the Successive Over-Relaxation and the Jacobi Over-Relaxation as workhorses to split the computation among the robots. We also provide conditions under which the proposed distributed protocols converge to the solution of the centralized two-stage approach. As a se..."
]
} |
1907.05538 | 2960971247 | We present a novel framework for collaboration amongst a team of robots performing Pose Graph Optimization (PGO) that addresses two important challenges for multi-robot SLAM: that of enabling information exchange "on-demand" via active rendezvous, and that of rejecting outlier measurements with high probability. Our key insight is to exploit relative position data present in the communication channel between agents, as an independent measurement in PGO. We show that our algorithmic and experimental framework for integrating Channel State Information (CSI) over the communication channels, with multi-agent PGO, addresses the two open challenges of enabling information exchange and rejecting outliers. Our presented framework is distributed and applicable in low-lighting or featureless environments where traditional sensors often fail. We present extensive experimental results on actual robots showing both the use of active rendezvous resulting in error reduction by 6X as compared to randomly occurring rendezvous and the use of CSI observations providing a reduction in ground truth pose estimation errors of 32 . These results demonstrate the promise of using a combination of multi-robot coordination and CSI to address challenges in multi-agent localization and mapping -- providing an important step towards integrating communication as a novel sensor for SLAM tasks. | Additionally, there is a large body of work focused on light-weight position estimation from Radio Frequency (RF) signals. For instance, RF localization has been successfully applied to positioning when access to GPS signals is unavailable, such as in indoor environments @cite_10 @cite_8 , but requires the presence of known anchors. Decentralized localization without the use of beacons or anchor points can be achieved through range @cite_20 and vectorial distance estimation @cite_29 via time-of-arrival and time-difference-of-arrival techniques for multilateration. Angle-of-arrival measurements can also be used for estimating orientation @cite_26 . Synthetic aperture radar (SAR) techniques have been used for indoor positioning in the presence of multipath and alleviate the need for bulky multi-antenna arrays @cite_0 @cite_24 @cite_16 @cite_25 , which shows great promise for robotic applications. | {
"cite_N": [
"@cite_26",
"@cite_8",
"@cite_29",
"@cite_0",
"@cite_24",
"@cite_16",
"@cite_10",
"@cite_25",
"@cite_20"
],
"mid": [
"2065860562",
"2124822240",
"2141775911",
"2013503669",
"2117909688",
"",
"2169126686",
"2158090937",
"2001813029"
],
"abstract": [
"We develop a novel localization theory for planar networks of nodes that measure each other?s relative position, i.e., we assume that nodes do not have the ability to perform measurements expressed in a common reference frame. We begin with some basic definitions of frame localizability and orientation localizability. Based on some key kinematic relationships, we characterize orientation localizability for networks with angle-of-arrival sensing. We then address the orientation localization problem in the presence of noisy measurements. Our first algorithm computes a least-square estimate of the unknown node orientations in a ring network given angle-of-arrival sensing. For arbitrary connected graphs, our second algorithm exploits kinematic relationships among the orientation of node in loops in order to reduce the effect of noise. We establish the convergence of the algorithm, and through some simulations we show that the algorithm reduces the mean-square error due to the noisy measurements.",
"In this paper, we present a robust, decentralized approach to RF-based location tracking. Our system, called MoteTrack, is based on low-power radio transceivers coupled with a modest amount of computation and storage capabilities. MoteTrack does not rely upon any back-end server or network infrastructure: the location of each mobile node is computed using a received radio signal strength signature from numerous beacon nodes to a database of signatures that is replicated across the beacon nodes themselves. This design allows the system to function despite significant failures of the radio beacon infrastructure. In our deployment of MoteTrack, consisting of 20 beacon nodes distributed across our Computer Science building, we achieve a 50th percentile and 80th percentile location-tracking accuracy of 2 meters and 3 meters respectively. In addition, MoteTrack can tolerate the failure of up to 60 of the beacon nodes without severely degrading accuracy, making the system suitable for deployment in highly volatile conditions. We present a detailed analysis of MoteTrack's performance under a wide range of conditions, including variance in the number of obstructions, beacon node failure, radio signature perturbations, receiver sensitivity, and beacon node density.",
"In this work we address the problem of optimal estimating the position of each agent in a network from relative noisy vectorial distances with its neighbors. Although the problem can be cast as a standard least-squares problem, the main challenge is to devise scalable algorithms that allow each agent to estimate its own position by means of only local communication and bounded complexity, independently of the network size and topology. We propose a gradient based algorithm that is guaranteed to have exponentially convergence rate to the optimal centralized least-square solution. Moreover we show the convergence also in presence of bounded delays and packet losses. We finally provide numerical results to support our work.",
"Recent years have seen the advent of new RF-localization systems that demonstrate tens of centimeters of accuracy. However, such systems require either deployment of new infrastructure, or extensive fingerprinting of the environment through training or crowdsourcing, impeding their wide-scale adoption. We present Ubicarse, an accurate indoor localization system for commodity mobile devices, with no specialized infrastructure or fingerprinting. Ubicarse enables handheld devices to emulate large antenna arrays using a new formulation of Synthetic Aperture Radar (SAR). Past work on SAR requires measuring mechanically controlled device movement with millimeter precision, far beyond what commercial accelerometers can provide. In contrast, Ubicarse's core contribution is the ability to perform SAR on handheld devices twisted by their users along unknown paths. Ubicarse is not limited to localizing RF devices; it combines RF localization with stereo-vision algorithms to localize common objects with no RF source attached to them. We implement Ubicarse on a HP SplitX2 tablet and empirically demonstrate a median error of 39 cm in 3-D device localization and 17 cm in object geotagging in complex indoor settings.",
"Despite the rapid growth of next-generation cellular networks, researchers and end-users today have limited visibility into the performance and problems of these networks. As LTE deployments move towards femto and pico cells, even operators struggle to fully understand the propagation and interference patterns affecting their service, particularly indoors. This paper introduces LTEye, the first open platform to monitor and analyze LTE radio performance at a fine temporal and spatial granularity. LTEye accesses the LTE PHY layer without requiring private user information or provider support. It provides deep insights into the PHY-layer protocols deployed in these networks. LTEye's analytics enable researchers and policy makers to uncover serious deficiencies in these networks due to inefficient spectrum utilization and inter-cell interference. In addition, LTEye extends synthetic aperture radar (SAR), widely used for radar and backscatter signals, to operate over cellular signals. This enables businesses and end-users to localize mobile users and capture the distribution of LTE performance across spatial locations in their facility. As a result, they can diagnose problems and better plan deployment of repeaters or femto cells. We implement LTEye on USRP software radios, and present empirical insights and analytics from multiple AT&T and Verizon base stations in our locality.",
"",
"Localization, finding the coordinates of an object with respect to other objects with known coordinates—hereinafter, referred to as anchors, is a nonlinear problem, as it involves solving circle equations when relating distances to Cartesian coordinates, or, computing Cartesian coordinates from angles using the law of sines. This nonlinear problem has been a focus of significant attention over the past two centuries and the progress follows closely with the advances in instrumentation as well as applied mathematics, geometry, statistics, and signal processing. The Internet-of-Things (IoT), with massive deployment of wireless tagged things, has renewed the interest and activity in finding novel, expert, and accurate indoor self-localization methods, where a particular emphasis is on distributed approaches. This paper is dedicated to reviewing a notable alternative to the nonlinear localization problem, i.e., a linear-convex method, based on ’s work. This linear solution utilizes relatively unknown geometric concepts in the context of localization problems, i.e., the barycentric coordinates and the Cayley–Menger determinants. Specifically, in an @math -dimensional Euclidean space, a set of @math anchors, objects with known locations, is sufficient (and necessary) to localize an arbitrary collection of objects with unknown locations—hereinafter, referred to as sensors, with a linear-iterative algorithm. To ease the presentation, we discuss the solution under a structural convexity condition, namely, the sensors lie inside the convex hull of at least @math anchors. Although rigorous results are included, several remarks and discussion throughout this paper provide the intuition behind the solution and are primarily aimed toward researchers and practitioners interested in learning about this challenging field of research. Additional figures and demos have been added as auxiliary material to support this aim.",
"RFIDs are emerging as a vital component of the Internet of Things. In 2012, billions of RFIDs have been deployed to locate equipment, track drugs, tag retail goods, etc. Current RFID systems, however, can only identify whether a tagged object is within radio range (which could be up to tens of meters), but cannot pinpoint its exact location. Past proposals for addressing this limitation rely on a line-of-sight model and hence perform poorly when faced with multipath effects or non-line-of-sight, which are typical in real-world deployments. This paper introduces the first fine-grained RFID positioning system that is robust to multipath and non-line-of-sight scenarios. Unlike past work, which considers multipath as detrimental, our design exploits multipath to accurately locate RFIDs. The intuition underlying our design is that nearby RFIDs experience a similar multipath environment (e.g., reflectors in the environment) and thus exhibit similar multipath profiles. We capture and extract these multipath profiles by using a synthetic aperture radar (SAR) created via antenna motion. We then adapt dynamic time warping (DTW) techniques to pinpoint a tag's location. We built a prototype of our design using USRP software radios. Results from a deployment of 200 commercial RFIDs in our university library demonstrate that the new design can locate misplaced books with a median accuracy of 11 cm.",
"This paper considers the noisy range-only network localization problem in which measurements of relative distances between agents are used to estimate their positions in networked systems. When distance information is noisy, existence and uniqueness of location solution are usually not guaranteed. It is well known that in presence of distance measurement noise, a node may have discontinuous deformations (e.g., flip ambiguities and discontinuous flex ambiguities). Thus, there are two issues that we consider in the noisy localization problem. The first one is the location estimate error propagated from distance measurement noise. We compare two kinds of analytical location error computation methods by assuming that each distance is corrupted with independent Gaussian random noise. These analytical results help us to understand effects of the measurement noises on the position estimation accuracy. After that, based on multidimensional scaling theory, we propose a distributed localization algorithm to solve the noisy range network localization problem. Our approach is robust to distance measurement noise, and it can be implemented in any random case without considering the network setup constraints. Moreover, a refined version of distributed noisy range localization method is developed, which achieves a good tradeoff between computational effort and global convergence especially in large-scale networks."
]
} |
1907.05538 | 2960971247 | We present a novel framework for collaboration amongst a team of robots performing Pose Graph Optimization (PGO) that addresses two important challenges for multi-robot SLAM: that of enabling information exchange "on-demand" via active rendezvous, and that of rejecting outlier measurements with high probability. Our key insight is to exploit relative position data present in the communication channel between agents, as an independent measurement in PGO. We show that our algorithmic and experimental framework for integrating Channel State Information (CSI) over the communication channels, with multi-agent PGO, addresses the two open challenges of enabling information exchange and rejecting outliers. Our presented framework is distributed and applicable in low-lighting or featureless environments where traditional sensors often fail. We present extensive experimental results on actual robots showing both the use of active rendezvous resulting in error reduction by 6X as compared to randomly occurring rendezvous and the use of CSI observations providing a reduction in ground truth pose estimation errors of 32 . These results demonstrate the promise of using a combination of multi-robot coordination and CSI to address challenges in multi-agent localization and mapping -- providing an important step towards integrating communication as a novel sensor for SLAM tasks. | To address some of the challenges of passive estimation, active SLAM approaches attempt to incorporate estimation uncertainty into the control of agents. This can include monitoring the estimation divergence during exploration @cite_28 and estimating the information gain from exploration versus the uncertainty reduction from repeated visitations @cite_4 @cite_27 using pose graphs. However, almost all active SLAM approaches rely on an explicit model of the environment. | {
"cite_N": [
"@cite_28",
"@cite_27",
"@cite_4"
],
"mid": [
"1972624884",
"",
"1535541145"
],
"abstract": [
"Autonomous exploration under uncertain robot location requires the robot to use active strategies to trade-off between the contrasting tasks of exploring the unknown scenario and satisfying given constraints on the admissible uncertainty in map estimation. The corresponding problem, namely active SLAM (Simultaneous Localization and Mapping) and exploration, has received a large attention from the robotic community for its relevance in mobile robotics applications. In this work we tackle the problem of active SLAM and exploration with Rao-Blackwellized Particle Filters. We propose an application of Kullback-Leibler divergence for the purpose of evaluating the particle-based SLAM posterior approximation. This metric is then applied in the definition of the expected information from a policy, which allows the robot to autonomously decide between exploration and place revisiting actions (i.e., loop closing). Extensive tests are performed in typical indoor and office environments and on well-known benchmarking scenarios belonging to SLAM literature, with the purpose of comparing the proposed approach with the state-of-the-art techniques and to evaluate the maturity of truly autonomous navigation systems based on particle filtering.",
"",
"We propose a novel method for robotic exploration that evaluates paths that minimize both the joint path and map entropy per meter traveled. The method uses Pose SLAM to update the path estimate, and grows an RRT* tree to generate the set of candidate paths. This action selection mechanism contrasts with previous appoaches in which the action set was built heuristically from a sparse set of candidate actions. The technique favorably compares agains the classical frontier-based exploration and other Active Pose SLAM methods in simulations in a common publicly available dataset."
]
} |