Schema of the records below: aid (string, 9-15 chars), mid (string, 7-10 chars), abstract (string, 78-2.56k chars), related_work (string, 92-1.77k chars), ref_abstract (dict).
1811.07738
2900872212
In this paper, we present a novel neural network architecture for retinal vessel segmentation that improves over the state of the art on two benchmark datasets, is the first to run in real time on high-resolution images, and has memory and processing requirements small enough for deployment in mobile and embedded systems. The M2U-Net has a new encoder-decoder architecture inspired by the U-Net. It adds pretrained components of MobileNetV2 in the encoder part and novel contractive bottleneck blocks in the decoder part that, combined with bilinear upsampling, drastically reduce the parameter count to 0.55M, compared to 31.03M in the original U-Net. We have evaluated its performance against a wide body of previously published results on three public datasets. On two of them, the M2U-Net achieves new state-of-the-art performance by a considerable margin. When implemented on a GPU, our method is the first to achieve real-time inference speeds on high-resolution fundus images. We also implemented our proposed network on an ARM-based embedded system, where it segments images in between 0.6 and 15 seconds, depending on the resolution. Thus, the M2U-Net enables a number of applications of retinal vessel structure extraction, such as early diagnosis of eye diseases, retinal biometric authentication systems, and robot-assisted microsurgery.
Liskowski and Krawiec @cite_12 propose a patch-based approach, in which a network consisting of a stack of convolutional layers followed by three fully-connected layers is trained on small patches of the input fundus image. In contrast, Maninis et al. @cite_35 and Yan et al. @cite_10 train their networks on complete fundus images.
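As a concrete, hedged illustration of the patch-based setup, the sketch below implements a classifier of the kind described: a stack of convolutional layers followed by three fully-connected layers, trained on small patches. The layer widths and patch size are illustrative assumptions, not the configuration of @cite_12.

```python
# Hypothetical patch-based vessel classifier in the spirit of @cite_12.
# Layer sizes and patch size are illustrative assumptions.
import torch
import torch.nn as nn

class PatchVesselNet(nn.Module):
    def __init__(self, patch_size=27):
        super().__init__()
        self.features = nn.Sequential(      # stack of convolutional layers
            nn.Conv2d(3, 32, 3), nn.ReLU(),
            nn.Conv2d(32, 64, 3), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        with torch.no_grad():               # infer the flattened feature size
            flat = self.features(torch.zeros(1, 3, patch_size, patch_size)).numel()
        self.classifier = nn.Sequential(    # three fully-connected layers
            nn.Linear(flat, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, 2),              # vessel vs. background for the centre pixel
        )

    def forward(self, patch):
        return self.classifier(self.features(patch).flatten(1))

# One forward pass labels the centre pixel of one small patch, so segmenting a
# full fundus image requires one query per pixel; full-image networks such as
# @cite_35 and @cite_10 avoid this per-pixel cost.
logits = PatchVesselNet()(torch.randn(8, 3, 27, 27))  # a batch of 8 patches
```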
{ "cite_N": [ "@cite_35", "@cite_10", "@cite_12" ], "mid": [ "2513326255", "2802388893", "2327793514" ], "abstract": [ "This paper presents Deep Retinal Image Understanding (DRIU), a unified framework of retinal image analysis that provides both retinal vessel and optic disc segmentation. We make use of deep Convolutional Neural Networks (CNNs), which have proven revolutionary in other fields of computer vision such as object detection and image classification, and we bring their power to the study of eye fundus images. DRIU uses a base network architecture on which two set of specialized layers are trained to solve both the retinal vessel and optic disc segmentation. We present experimental validation, both qualitative and quantitative, in four public datasets for these tasks. In all of them, DRIU presents super-human performance, that is, it shows results more consistent with a gold standard than a second human annotator used as control.", "Objective: Deep learning based methods for retinal vessel segmentation are usually trained based on pixel-wise losses, which treat all vessel pixels with equal importance in pixel-to-pixel matching between a predicted probability map and the corresponding manually annotated segmentation. However, due to the highly imbalanced pixel ratio between thick and thin vessels in fundus images, a pixel-wise loss would limit deep learning models to learn features for accurate segmentation of thin vessels, which is an important task for clinical diagnosis of eye-related diseases. Methods: In this paper, we propose a new segment-level loss which emphasizes more on the thickness consistency of thin vessels in the training process. By jointly adopting both the segment-level and the pixel-wise losses, the importance between thick and thin vessels in the loss calculation would be more balanced. As a result, more effective features can be learned for vessel segmentation without increasing the overall model complexity. Results: Experimental results on public data sets demonstrate that the model trained by the joint losses outperforms the current state-of-the-art methods in both separate-training and cross-training evaluations. Conclusion: Compared to the pixel-wise loss, utilizing the proposed joint-loss framework is able to learn more distinguishable features for vessel segmentation. In addition, the segment-level loss can bring consistent performance improvement for both deep and shallow network architectures. Significance: The findings from this study of using joint losses can be applied to other deep learning models for performance improvement without significantly changing the network architectures.", "The condition of the vascular network of human eye is an important diagnostic factor in ophthalmology. Its segmentation in fundus imaging is a nontrivial task due to variable size of vessels, relatively low contrast, and potential presence of pathologies like microaneurysms and hemorrhages. Many algorithms, both unsupervised and supervised, have been proposed for this purpose in the past. We propose a supervised segmentation technique that uses a deep neural network trained on a large (up to 400 @math 000) sample of examples preprocessed with global contrast normalization, zero-phase whitening, and augmented using geometric transformations and gamma corrections. Several variants of the method are considered, including structured prediction, where a network classifies multiple pixels simultaneously. 
When applied to standard benchmarks of fundus imaging, the DRIVE, STARE, and CHASE databases, the networks significantly outperform the previous algorithms on the area under ROC curve measure (up to @math ) and accuracy of classification (up to @math ). The method is also resistant to the phenomenon of central vessel reflex, sensitive in detection of fine vessels ( @math ), and fares well on pathological cases." ] }
Maninis et al. @cite_35 extract intermediate feature maps of a VGG-16 network @cite_0 pretrained on ImageNet, upsample them via transposed convolutions, and concatenate them before applying a final @math convolutional layer. Their method, called DRIU, achieves a state-of-the-art Dice score of 0.822 on DRIVE. Yan et al. @cite_10 train the U-Net model @cite_32 with a joint loss by appending two separate branches, one with a pixel-wise and one with a segment-level loss, which are trained simultaneously.
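A rough PyTorch sketch of the DRIU-style aggregation described above. Which VGG-16 stages are tapped, the transposed-convolution strides, and the fusion kernel size are assumptions made for illustration, not the published configuration:

```python
# Hedged sketch of a DRIU-style head (@cite_35): intermediate feature maps of
# an ImageNet-pretrained VGG-16 are upsampled with transposed convolutions,
# concatenated, and fused by a final convolution. Tapped stages, channel
# counts, and kernel sizes are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision.models import vgg16

class DRIUStyleHead(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = vgg16(weights="IMAGENET1K_V1").features
        self.stage1 = backbone[:4]    # 64 channels,  full resolution
        self.stage2 = backbone[4:9]   # 128 channels, 1/2 resolution
        self.stage3 = backbone[9:16]  # 256 channels, 1/4 resolution
        self.squeeze1 = nn.Conv2d(64, 16, kernel_size=1)
        self.up2 = nn.ConvTranspose2d(128, 16, kernel_size=4, stride=2, padding=1)
        self.up3 = nn.ConvTranspose2d(256, 16, kernel_size=8, stride=4, padding=2)
        self.fuse = nn.Conv2d(48, 1, kernel_size=1)  # final fusion to a vessel map

    def forward(self, x):                 # x: (N, 3, H, W), H and W divisible by 4
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        f3 = self.stage3(f2)
        fused = torch.cat([self.squeeze1(f1), self.up2(f2), self.up3(f3)], dim=1)
        return self.fuse(fused)           # per-pixel vessel logits
```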
{ "cite_N": [ "@cite_0", "@cite_35", "@cite_10", "@cite_32" ], "mid": [ "1686810756", "2513326255", "2802388893", "2952232639" ], "abstract": [ "In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.", "This paper presents Deep Retinal Image Understanding (DRIU), a unified framework of retinal image analysis that provides both retinal vessel and optic disc segmentation. We make use of deep Convolutional Neural Networks (CNNs), which have proven revolutionary in other fields of computer vision such as object detection and image classification, and we bring their power to the study of eye fundus images. DRIU uses a base network architecture on which two set of specialized layers are trained to solve both the retinal vessel and optic disc segmentation. We present experimental validation, both qualitative and quantitative, in four public datasets for these tasks. In all of them, DRIU presents super-human performance, that is, it shows results more consistent with a gold standard than a second human annotator used as control.", "Objective: Deep learning based methods for retinal vessel segmentation are usually trained based on pixel-wise losses, which treat all vessel pixels with equal importance in pixel-to-pixel matching between a predicted probability map and the corresponding manually annotated segmentation. However, due to the highly imbalanced pixel ratio between thick and thin vessels in fundus images, a pixel-wise loss would limit deep learning models to learn features for accurate segmentation of thin vessels, which is an important task for clinical diagnosis of eye-related diseases. Methods: In this paper, we propose a new segment-level loss which emphasizes more on the thickness consistency of thin vessels in the training process. By jointly adopting both the segment-level and the pixel-wise losses, the importance between thick and thin vessels in the loss calculation would be more balanced. As a result, more effective features can be learned for vessel segmentation without increasing the overall model complexity. Results: Experimental results on public data sets demonstrate that the model trained by the joint losses outperforms the current state-of-the-art methods in both separate-training and cross-training evaluations. Conclusion: Compared to the pixel-wise loss, utilizing the proposed joint-loss framework is able to learn more distinguishable features for vessel segmentation. In addition, the segment-level loss can bring consistent performance improvement for both deep and shallow network architectures. 
Significance: The findings from this study of using joint losses can be applied to other deep learning models for performance improvement without significantly changing the network architectures.", "There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at this http URL ." ] }
While these supervised methods achieve good segmentation results, their computational requirements are substantial; as a consequence, they are commonly run on high-performance, server-grade GPUs such as the NVIDIA TITAN-class GPUs used in @cite_12 @cite_35 @cite_10.
Additionally, they either fail to reach the performance of unsupervised methods on very high-resolution datasets or, owing to their computational requirements, can only be trained on small patches of the input image that fit into memory, which further increases the time needed to segment a complete fundus image. For example, on the high-resolution HRF dataset, the unsupervised method introduced by Annunziata et al. @cite_33 achieves a state-of-the-art Dice score of 0.7578, while the best-performing supervised method @cite_10 achieves only 0.7212.
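For reference, the Dice score used in these comparisons is the standard overlap measure between a predicted vessel map $P$ and the manual annotation $G$ (stated here for context, not quoted from the papers):

$$ \mathrm{Dice}(P, G) = \frac{2\,|P \cap G|}{|P| + |G|}, $$

so the roughly 0.037 gap on HRF corresponds to a noticeably better overlap of the unsupervised method's output with the manual annotation.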
{ "cite_N": [ "@cite_10", "@cite_33" ], "mid": [ "2802388893", "595827679" ], "abstract": [ "Objective: Deep learning based methods for retinal vessel segmentation are usually trained based on pixel-wise losses, which treat all vessel pixels with equal importance in pixel-to-pixel matching between a predicted probability map and the corresponding manually annotated segmentation. However, due to the highly imbalanced pixel ratio between thick and thin vessels in fundus images, a pixel-wise loss would limit deep learning models to learn features for accurate segmentation of thin vessels, which is an important task for clinical diagnosis of eye-related diseases. Methods: In this paper, we propose a new segment-level loss which emphasizes more on the thickness consistency of thin vessels in the training process. By jointly adopting both the segment-level and the pixel-wise losses, the importance between thick and thin vessels in the loss calculation would be more balanced. As a result, more effective features can be learned for vessel segmentation without increasing the overall model complexity. Results: Experimental results on public data sets demonstrate that the model trained by the joint losses outperforms the current state-of-the-art methods in both separate-training and cross-training evaluations. Conclusion: Compared to the pixel-wise loss, utilizing the proposed joint-loss framework is able to learn more distinguishable features for vessel segmentation. In addition, the segment-level loss can bring consistent performance improvement for both deep and shallow network architectures. Significance: The findings from this study of using joint losses can be applied to other deep learning models for performance improvement without significantly changing the network architectures.", "Accurate vessel detection in retinal images is an important and difficult task. Detection is made more challenging in pathological images with the presence of exudates and other abnormalities. In this paper, we present a new unsupervised vessel segmentation approach to address this problem. A novel inpainting filter, called neighborhood estimator before filling, is proposed to inpaint exudates in a way that nearby false positives are significantly reduced during vessel enhancement. Retinal vascular enhancement is achieved with a multiple-scale Hessian approach. Experimental results show that the proposed vessel segmentation method outperforms state-of-the-art algorithms reported in the recent literature, both visually and in terms of quantitative measurements, with overall mean accuracy of 95.62 on the STARE dataset and 95.81 on the HRF dataset." ] }
1811.07793
2901633236
We present a coarse-to-fine framework for content-aware image retargeting. Our framework first constructs the semantic structure of the input image with a deep convolutional neural network. Then a uniform re-sampling suited to preserving the semantic structure is devised to resize the feature maps to the target aspect ratio at each feature layer. The final retargeting result is generated by coarse-to-fine nearest neighbor field search and step-by-step nearest neighbor field fusion. We empirically demonstrate the effectiveness of our model with both qualitative and quantitative results on the widely used RetargetMe dataset.
Numerous works on image retargeting have been carried out in the past decades. Unlike traditional methods such as uniform scaling (SCL), recent approaches typically seek to change the size of the image while preserving its important content. By using face detectors @cite_13 or visual saliency detection methods @cite_9 to locate important regions in the image, one simple way to resize the image is cropping (CR), which eliminates the unimportant regions. However, directly eliminating regions by CR may cause information loss. Seam carving (SC) @cite_26 instead iteratively removes an 8-connected seam of pixels to preserve visual saliency. To avoid the drawbacks of any single retargeting method, multi-operator techniques (MULTIOP) @cite_1 @cite_11 combine SC, SCL, and CR, resizing the image according to a defined optimal energy cost, such as an image retargeting quality assessment metric. Pritch et al. describe a Shift-Map (SM) technique that removes or adds band regions instead of scaling or stretching the image. All the above methods resize the image by removing discrete regions. Other approaches attempt to resize the image from continuous and summarization perspectives.
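To make the seam-carving operator concrete, the sketch below removes one vertical 8-connected seam under a simple gradient-magnitude energy, one of several energies @cite_26 supports; full implementations iterate this step and may add user constraints:

```python
# Minimal sketch of one vertical seam removal in the style of seam carving
# (@cite_26); the gradient-magnitude energy is an illustrative choice.
import numpy as np

def remove_one_seam(gray):
    h, w = gray.shape
    energy = np.abs(np.gradient(gray, axis=0)) + np.abs(np.gradient(gray, axis=1))
    # Dynamic programming: cost[i, j] = cheapest 8-connected seam ending at (i, j).
    cost = energy.copy()
    for i in range(1, h):
        left = np.concatenate(([np.inf], cost[i - 1, :-1]))
        right = np.concatenate((cost[i - 1, 1:], [np.inf]))
        cost[i] += np.minimum(np.minimum(left, cost[i - 1]), right)
    # Backtrack the minimal seam from bottom to top.
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for i in range(h - 2, -1, -1):
        j = seam[i + 1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        seam[i] = lo + int(np.argmin(cost[i, lo:hi]))
    # Drop one pixel per row, shrinking the width by one column.
    mask = np.ones((h, w), dtype=bool)
    mask[np.arange(h), seam] = False
    return gray[mask].reshape(h, w - 1)
```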
{ "cite_N": [ "@cite_26", "@cite_9", "@cite_1", "@cite_13", "@cite_11" ], "mid": [ "2018377917", "2128272608", "", "2164598857", "2407857922" ], "abstract": [ "Effective resizing of images should not only use geometric constraints, but consider the image content as well. We present a simple image operator called seam carving that supports content-aware image resizing for both reduction and expansion. A seam is an optimal 8-connected path of pixels on a single image from top to bottom, or left to right, where optimality is defined by an image energy function. By repeatedly carving out or inserting seams in one direction we can change the aspect ratio of an image. By applying these operators in both directions we can retarget the image to a new size. The selection and order of seams protect the content of the image, as defined by the energy function. Seam carving can also be used for image content enhancement and object removal. We support various visual saliency measures for defining the energy of an image, and can also include user input to guide the process. By storing the order of seams in an image we create multi-size images, that are able to continuously change in real time to fit a given size.", "A visual attention system, inspired by the behavior and the neuronal architecture of the early primate visual system, is presented. Multiscale image features are combined into a single topographical saliency map. A dynamical neural network then selects attended locations in order of decreasing saliency. The system breaks down the complex problem of scene understanding by rapidly selecting, in a computationally efficient manner, conspicuous locations to be analyzed in detail.", "", "This paper describes a machine learning approach for visual object detection which is capable of processing images extremely rapidly and achieving high detection rates. This work is distinguished by three key contributions. The first is the introduction of a new image representation called the \"integral image\" which allows the features used by our detector to be computed very quickly. The second is a learning algorithm, based on AdaBoost, which selects a small number of critical visual features from a larger set and yields extremely efficient classifiers. The third contribution is a method for combining increasingly more complex classifiers in a \"cascade\" which allows background regions of the image to be quickly discarded while spending more computation on promising object-like regions. The cascade can be viewed as an object specific focus-of-attention mechanism which unlike previous approaches provides statistical guarantees that discarded regions are unlikely to contain the object of interest. In the domain of face detection the system yields detection rates comparable to the best previous systems. Used in real-time applications, the detector runs at 15 frames per second without resorting to image differencing or skin color detection.", "Content-aware image retargeting has attracted substantial research interests in the related research community. However, so far there is still no method can preserve important image contents and structure well without introducing deformation. To address this problem, we propose a Saliency & Structure Preserving Multi-operator (SSPM) method. SSPM classifies images into three categories utilizing SIFT density to improve performance of saliency preservation, helping to mitigate negative influence from center-bias property of most existing saliency detection models. 
SSPM also employs different principles to improve structure preservation performance, including Earth Mover's Distance (EMD) and Gray-Level Cooccurrence Matrix (GLCM) to get optimal operator sequences for smart content-aware image retargeting. SSPM method not only can well preserve salient contents and structure, but also can greatly improve deformation resilience. Experimental results demonstrated that our method outperforms state-of-art image retargeting methods." ] }
Summarization-based retargeting methods @cite_0 @cite_21 @cite_6 @cite_18 resize an image by eliminating insignificant patches while maintaining coherence between the original and the retargeted image. Simakov et al. @cite_0 measure the bidirectional patch similarity (i.e., completeness and coherence) between two images and iteratively change the original image's size towards the target size. Cho et al. @cite_21 break an image into non-overlapping patches and reconstruct the retargeted image from these patches under "patch domain" constraints. Barnes et al. @cite_6 propose a fast randomized algorithm called PatchMatch to find a dense nearest neighbor field (NNF) between the patches of two images, with which a retargeted image can be obtained by a retargeting method similar to that of @cite_0. Wu et al. @cite_18 analyze the "translational symmetry" widely present in real-world images and summarize the image content based on symmetric lattices.
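For concreteness, the bidirectional similarity of Simakov et al. @cite_0 can be written as a completeness term plus a coherence term over patches $P \subset S$ (source) and $Q \subset T$ (retargeted image); this is a standard rendering of the measure described in the abstract below, with $D$ a patch-level distance and $N_S$, $N_T$ the patch counts:

$$ d(S, T) = \underbrace{\frac{1}{N_S} \sum_{P \subset S} \min_{Q \subset T} D(P, Q)}_{\text{completeness}} + \underbrace{\frac{1}{N_T} \sum_{Q \subset T} \min_{P \subset S} D(P, Q)}_{\text{coherence}}, $$

and retargeting proceeds by iteratively updating $T$ to reduce $d(S, T)$.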
{ "cite_N": [ "@cite_0", "@cite_18", "@cite_21", "@cite_6" ], "mid": [ "2115273023", "", "2141155330", "1993120651" ], "abstract": [ "We propose a principled approach to summarization of visual data (images or video) based on optimization of a well-defined similarity measure. The problem we consider is re-targeting (or summarization) of image video data into smaller sizes. A good ldquovisual summaryrdquo should satisfy two properties: (1) it should contain as much as possible visual information from the input data; (2) it should introduce as few as possible new visual artifacts that were not in the input data (i.e., preserve visual coherence). We propose a bi-directional similarity measure which quantitatively captures these two requirements: Two signals S and T are considered visually similar if all patches of S (at multiple scales) are contained in T, and vice versa. The problem of summarization re-targeting is posed as an optimization problem of this bi-directional similarity measure. We show summarization results for image and video data. We further show that the same approach can be used to address a variety of other problems, including automatic cropping, completion and synthesis of visual data, image collage, object removal, photo reshuffling and more.", "", "We introduce the patch transform, where an image is broken into non-overlapping patches, and modifications or constraints are applied in the ldquopatch domainrdquo. A modified image is then reconstructed from the patches, subject to those constraints. When no constraints are given, the reconstruction problem reduces to solving a jigsaw puzzle. Constraints the user may specify include the spatial locations of patches, the size of the output image, or the pool of patches from which an image is reconstructed. We define terms in a Markov network to specify a good image reconstruction from patches: neighboring patches must fit to form a plausible image, and each patch should be used only once. We find an approximate solution to the Markov network using loopy belief propagation, introducing an approximation to handle the combinatorially difficult patch exclusion constraint. The resulting image reconstructions show the original image, modified to respect the userpsilas changes. We apply the patch transform to various image editing tasks and show that the algorithm performs well on real world images.", "This paper presents interactive image editing tools using a new randomized algorithm for quickly finding approximate nearest-neighbor matches between image patches. Previous research in graphics and vision has leveraged such nearest-neighbor searches to provide a variety of high-level digital image editing tools. However, the cost of computing a field of such matches for an entire image has eluded previous efforts to provide interactive performance. Our algorithm offers substantial performance improvements over the previous state of the art (20-100x), enabling its use in interactive editing tools. The key insights driving the algorithm are that some good patch matches can be found via random sampling, and that natural coherence in the imagery allows us to propagate such matches quickly to surrounding areas. We offer theoretical analysis of the convergence properties of the algorithm, as well as empirical and practical evidence for its high quality and performance. 
This one simple algorithm forms the basis for a variety of tools -- image retargeting, completion and reshuffling -- that can be used together in the context of a high-level image editing application. Finally, we propose additional intuitive constraints on the synthesis process that offer the user a level of control unavailable in previous methods." ] }
1811.07600
2901357117
Most often, chat-bots are built to serve the purpose of a search engine or a human assistant: their primary goal is to provide information to the user or help them complete a task. However, these chat-bots are incapable of responding to unscripted queries like "Hi, what's up" or "What's your favourite food". Human evaluation judgments show that four humans come to a consensus on the intent of a given chat-domain query only 77% of the time, making it evident how non-trivial this task is. In our work, we show why it is difficult to break the chitchat space into clearly defined intents. We propose a system to handle this task in chat-bots, keeping in mind scalability, interpretability, appropriateness, trustworthiness, relevance and coverage. Our work introduces a pipeline for query understanding in chitchat using hierarchical intents, as well as a way to use seq2seq auto-generation models in professional bots. We explore an interpretable model for chat domain detection and also show how various components such as adult/offensive content classification, grammars and regex patterns, curated personality-based responses, generic guided evasive responses, and response generation models can be combined in a scalable way to solve this problem.
Task and search intents in bots and personal assistants have been widely studied. The advantages of bots, such as discoverability, availability and contextual understanding, are often discussed in the literature @cite_9 . Klopfenstein et al. @cite_9 discuss bots that are used as functional replacements for mobile applications and name them "Botplications". They also point out the ease with which a bot can be trained for question-answering using platforms like IBM's Watson (https://www.ibm.com/watson). However, they do not discuss the need for a platform to train bots that can understand usual chit-chat.
{ "cite_N": [ "@cite_9" ], "mid": [ "2622863697" ], "abstract": [ "This work documents the recent rise in popularity of messaging bots: chatterbot-like agents with simple, textual interfaces that allow users to access information, make use of services, or provide entertainment through online messaging platforms. Conversational interfaces have been often studied in their many facets, including natural language processing, artificial intelligence, human-computer interaction, and usability. In this work we analyze the recent trends in chatterbots and provide a survey of major messaging platforms, reviewing their support for bots and their distinguishing features. We then argue for what we call \"Botplication\", a bot interface paradigm that makes use of context, history, and structured conversation elements for input and output in order to provide a conversational user experience while overcoming the limitations of text-only interfaces." ] }
1811.07769
2901059565
More than half of the world's roads lack adequate street addressing systems. Lack of addresses is even more visible in daily lives of people in developing countries. We would like to object to the assumption that having an address is a luxury, by proposing a generative address design that maps the world in accordance with streets. The addressing scheme is designed considering several traditional street addressing methodologies employed in the urban development scenarios around the world. Our algorithm applies deep learning to extract roads from satellite images, converts the road pixel confidences into a road network, partitions the road network to find neighborhoods, and labels the regions, roads, and address units using graph- and proximity-based algorithms. We present our results on a sample US city, and several developing cities, compare travel times of users using current ad hoc and new complete addresses, and contrast our addressing solution to current industrial and open geocoding alternatives.
Automating the generation of maps has been extensively studied in the urban procedural modeling world @cite_10 @cite_23 @cite_21 @cite_9 @cite_7 , creating detailed and structurally realistic models; however, none of these are applicable as a synthesis method in the real world. In contrast, inverse procedural modeling approaches @cite_30 process real-world data to obtain generative representations. We follow this latter path and rely on satellite imagery as the input to our synthesis approach.
{ "cite_N": [ "@cite_30", "@cite_7", "@cite_9", "@cite_21", "@cite_23", "@cite_10" ], "mid": [ "2474846919", "2086227409", "2053430767", "2117741646", "2023148032", "2154604972" ], "abstract": [ "This course presents a collection of state-of-the-art approaches for modeling and editing of 3D models for virtual worlds, simulations, and entertainment, in addition to real-world applications. The first contribution of this course is a coherent review of inverse procedural modeling (IPM) (i.e., proceduralization of provided 3D content). We describe different formulations of the problem as well as solutions based on those formulations. We show that although the IPM framework seems under-constrained, the state-of-the-art solutions actually use simple analogies to convert the problem into a set of fundamental computer science problems, which are then solved by corresponding algorithms or optimizations. The second contribution includes a description and categorization of results and applications of the IPM frameworks. Moreover, a substantial part of the course is devoted to summarizing different domain IPM frameworks for practical content generation in modeling and animation.", "In modern urban areas, we often find a transportation network that follows a superimposed pattern. In this paper, we propose a novel method to generate a virtual traffic network based on (1) image-derived templates, and (2) a rule-based generating system. Using 2D images as input maps, various road maps with different patterns could be produced. This traffic network generating model adjusts itself intelligently in order to avoid restricted geographical areas or urban developments. The generative model follows closely directions of elevation and connects road ends in ways that allow various types of breakpoints.", "We present an interactive system for synthesizing urban layouts by example. Our method simultaneously performs both a structure-based synthesis and an image-based synthesis to generate a complete urban layout with a plausible street network and with aerial-view imagery. Our approach uses the structure and image data of real-world urban areas and a synthesis algorithm to provide several high-level operations to easily and interactively generate complex layouts by example. The user can create new urban layouts by a sequence of operations such as join, expand, and blend without being concerned about low-level structural details. Further, the ability to blend example urban layout fragments provides a powerful way to generate new synthetic content. We demonstrate our system by creating urban layouts using example fragments from several real-world cities, each ranging from hundreds to thousands of city blocks and parcels.", "Modeling a city poses a number of problems to computer graphics. Every urban area has a transportation network that follows population and environmental influences, and often a superimposed pattern plan. The buildings appearances follow historical, aesthetic and statutory rules. To create a virtual city, a roadmap has to be designed and a large number of buildings need to be generated. We propose a system using a procedural approach based on L-systems to model cities. From various image maps given as input, such as land-water boundaries and population density, our system generates a system of highways and streets, divides the land into lots, and creates the appropriate geometry for the buildings on the respective allotments. 
For the creation of a city street map, L-systems have been extended with methods that allow the consideration of global goals and local constraints and reduce the complexity of the production rules. An L-system that generates geometry and a texturing system based on texture elements and procedural methods compose the buildings.", "We present a method for interactive procedural generation of parcels within the urban modeling pipeline. Our approach performs a partitioning of the interior of city blocks using user-specified subdivision attributes and style parameters. Moreover, our method is both robust and persistent in the sense of being able to map individual parcels from before an edit operation to after an edit operation – this enables transferring most, if not all, customizations despite small to large-scale interactive editing operations. The guidelines guarantee that the resulting subdivisions are functionally and geometrically plausible for subsequent building modeling and construction. Our results include visual and statistical comparisons that demonstrate how the parcel configurations created by our method can closely resemble those found in real-world cities of a large variety of styles. By directly addressing the block subdivision problem, we intend to increase the editability and realism of the urban modeling pipeline and to become a standard in parcel generation for future urban modeling methods. © 2012 Wiley Periodicals, Inc.", "This paper addresses the problem of interactively modeling large street networks. We introduce an intuitive and flexible modeling framework in which a user can create a street network from scratch or modify an existing street network. This is achieved through designing an underlying tensor field and editing the graph representing the street network. The framework is intuitive because it uses tensor fields to guide the generation of a street network. The framework is flexible because it allows the user to combine various global and local modeling operations such as brush strokes, smoothing, constraints, noise and rotation fields. Our results will show street networks and three-dimensional urban geometry of high visual quality." ] }
1811.07615
2964215259
Density-based clustering is the task of discovering high-density regions of entities (clusters) that are separated from each other by contiguous regions of low-density. DBSCAN is, arguably, the most popular density-based clustering algorithm. However, its cluster recovery capabilities depend on the combination of the two parameters. In this paper we present a new density-based clustering algorithm which uses reverse nearest neighbour (RNN) and has a single parameter. We also show that it is possible to estimate a good value for this parameter using a clustering validity index. The RNN queries enable our algorithm to estimate densities taking more than a single entity into account, and to recover clusters that are not well-separated or have different densities. Our experiments on synthetic and real-world data sets show our proposed algorithm outperforms DBSCAN and its recent variant ISDBSCAN.
It is often stated that density-based clustering algorithms are capable of recovering clusters of arbitrary shapes. This is a very tempting thought, which may lead some to disregard the importance of selecting an appropriate distance or similarity measure. This measure is key to producing homogeneous clusters, as it defines homogeneity. Selecting a measure will have an impact on the actual clustering. Most likely the impact will not be as obvious as when applying an algorithm such as k-means @cite_1 (where the measure in use leads to a bias towards a particular cluster shape), but the impact of this selection will still exist at a more local level. If this were not the case, DBSCAN would produce the same clustering regardless of the distance measure in place.
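A tiny, self-contained illustration of this point: running scikit-learn's DBSCAN on the same data under two different metrics generally produces different partitions. The dataset and parameter values below are arbitrary assumptions chosen only to make the effect visible:

```python
# Illustration: DBSCAN's output depends on the chosen distance measure.
# Data and parameters are arbitrary assumptions for demonstration.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2)) * np.array([1.0, 5.0])  # anisotropic cloud

labels_euclidean = DBSCAN(eps=1.0, min_samples=5, metric="euclidean").fit_predict(X)
labels_manhattan = DBSCAN(eps=1.0, min_samples=5, metric="manhattan").fit_predict(X)

# The two partitions generally disagree, showing that even "arbitrary shape"
# density-based algorithms remain sensitive to the measure at a local level.
print((labels_euclidean != labels_manhattan).any())
```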
{ "cite_N": [ "@cite_1" ], "mid": [ "2127218421" ], "abstract": [ "The main purpose of this paper is to describe a process for partitioning an N-dimensional population into k sets on the basis of a sample. The process, which is called 'k-means,' appears to give partitions which are reasonably efficient in the sense of within-class variance. That is, if p is the probability mass function for the population, $S = \{S_1, S_2, \ldots, S_k\}$ is a partition of $E_N$, and $u_i$, $i = 1, 2, \ldots, k$, is the conditional mean of p over the set $S_i$, then $W^2(S) = \sum_{i=1}^{k} \int_{S_i} \lVert z - u_i \rVert^2 \, dp(z)$ tends to be low for the partitions S generated by the method. We say 'tends to be low,' primarily because of intuitive considerations, corroborated to some extent by mathematical analysis and practical computational experience. Also, the k-means procedure is easily programmed and is computationally economical, so that it is feasible to process very large samples on a digital computer. Possible applications include methods for similarity grouping, nonlinear prediction, approximating multivariate distributions, and nonparametric tests for independence among several variables. In addition to suggesting practical classification methods, the study of k-means has proved to be theoretically interesting. The k-means concept represents a generalization of the ordinary sample mean, and one is naturally led to study the pertinent asymptotic behavior, the object being to establish some sort of law of large numbers for the k-means. This problem is sufficiently interesting, in fact, for us to devote a good portion of this paper to it. The k-means are defined in section 2.1, and the main results which have been obtained on the asymptotic behavior are given there. The rest of section 2 is devoted to the proofs of these results. Section 3 describes several specific possible applications, and reports some preliminary results from computer experiments conducted to explore the possibilities inherent in the k-means idea. The extension to general metric spaces is indicated briefly in section 4. The original point of departure for the work described here was a series of problems in optimal classification (MacQueen [9]) which represented special" ] }
ISDBSCAN outperforms the methods above as well as OPTICS. Arguably, the major reason for this is its use of the k-influence space ($IS_k$) to define the density around a particular entity. $IS_k$ is based on the k-nearest neighbour ($NN_k$) @cite_21 and reverse k-nearest neighbour ($RNN_k$) @cite_8 methods, where $k$ is the number of nearest neighbours. The reverse k-nearest neighbours of an entity $x_i$ is given by the set $RNN_k(x_i) = \{x_j : x_i \in NN_k(x_j)\}$.
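A hedged sketch of these constructions; the intersection form of the influence space follows the ISDBSCAN description summarized above, while the implementation details are assumptions:

```python
# Sketch: k-nearest neighbours (NN_k), reverse k-nearest neighbours (RNN_k),
# and a k-influence space IS_k(p) = NN_k(p) ∩ RNN_k(p), per the description above.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def influence_spaces(X, k):
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)                 # first neighbour is the point itself
    NN = [set(row[1:]) for row in idx]        # NN_k for every entity
    RNN = [set() for _ in range(len(X))]      # RNN_k(p) = {q : p in NN_k(q)}
    for q, neighbours in enumerate(NN):
        for p in neighbours:
            RNN[p].add(q)
    # Entities in dense regions keep large influence spaces; border and noise
    # entities lose most of their neighbours in the intersection.
    return [NN[p] & RNN[p] for p in range(len(X))]

X = np.random.default_rng(1).normal(size=(100, 2))
IS = influence_spaces(X, k=7)
```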
{ "cite_N": [ "@cite_21", "@cite_8" ], "mid": [ "1985258161", "2076287166" ], "abstract": [ "Abstract Nonparametric regression is a set of techniques for estimating a regression curve without making strong assumptions about the shape of the true regression function. These techniques are therefore useful for building and checking parametric models, as well as for data description. Kernel and nearest-neighbor regression estimators are local versions of univariate location estimators, and so they can readily be introduced to beginning students and consulting clients who are familiar with such summaries as the sample mean and median.", "Inherent in the operation of many decision support and continuous referral systems is the notion of the “influence” of a data point on the database. This notion arises in examples such as finding the set of customers affected by the opening of a new store outlet location, notifying the subset of subscribers to a digital library who will find a newly added document most relevant, etc. Standard approaches to determining the influence set of a data point involve range searching and nearest neighbor queries. In this paper, we formalize a novel notion of influence based on reverse neighbor queries and its variants. Since the nearest neighbor relation is not symmetric, the set of points that are closest to a query point (i.e., the nearest neighbors) differs from the set of points that have the query point as their nearest neighbor (called the reverse nearest neighbors). Influence sets based on reverse nearest neighbor (RNN) queries seem to capture the intuitive notion of influence from our motivating examples. We present a general approach for solving RNN queries and an efficient R-tree based method for large data sets, based on this approach. Although the RNN query appears to be natural, it has not been studied previously. RNN queries are of independent interest, and as such should be part of the suite of available queries for processing spatial and multimedia data. In our experiments with real geographical data, the proposed method appears to scale logarithmically, whereas straightforward sequential scan scales linearly. Our experimental study also shows that approaches based on range searching or nearest neighbors are ineffective at finding influence sets of our interest." ] }
1811.07533
2932274394
Variational dropout (VD) is a generalization of Gaussian dropout, which aims at inferring the posterior of network weights based on a log-uniform prior on them, so as to learn these weights as well as the dropout rate simultaneously. The log-uniform prior not only interprets the regularization capacity of Gaussian dropout in network training, but also underpins the inference of such a posterior. However, the log-uniform prior is an improper prior (i.e., its integral is infinite), which causes the inference of the posterior to be ill-posed, thus restricting the regularization performance of VD. To address this problem, we present a new generalization of Gaussian dropout, termed variational Bayesian dropout (VBD), which instead exploits a hierarchical prior on the network weights and infers a new joint posterior. Specifically, we implement the hierarchical prior as a zero-mean Gaussian distribution with variance sampled from a uniform hyper-prior. Then, we incorporate such a prior into inferring the joint posterior over the network weights and the variance in the hierarchical prior, with which both the network training and the dropout rate estimation can be cast into a joint optimization problem. More importantly, the hierarchical prior is a proper prior, which enables the inference of the posterior to be well-posed. In addition, we further show that the proposed VBD can be seamlessly applied to network compression. Experiments on both classification and network compression tasks demonstrate the superior performance of the proposed VBD in terms of regularizing network training.
VD is a generalization of Gaussian dropout which is able to interpret the regularization capacity of dropout as well as to automatically estimate the dropout rate by inferring the posterior of the network weights. For example, @cite_10 proves that training a network within the variational dropout framework implicitly imposes the log-uniform prior on the weights to prevent over-fitting. Since the dropout rate can be determined automatically, several works @cite_9 @cite_17 further apply VD to compress neural networks. However, the log-uniform prior is an improper prior, which causes the inference of the posterior over network weights in VD to be ill-posed @cite_18 @cite_17 , thus limiting its performance in preventing over-fitting. In this study, the proposed VBD imposes a proper hierarchical prior on the network weights, which induces a well-posed Bayesian inference over them and thus noticeably improves the regularization capacity.
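To spell out the contrast (a sketch of the standard argument; the bound $\theta$ below is an assumption, not the paper's notation): the log-uniform prior on a weight $w$ means

$$ p(\log |w|) \propto \mathrm{const} \;\Longleftrightarrow\; p(|w|) \propto \frac{1}{|w|}, \qquad \int_0^{\infty} \frac{dt}{t} = \infty, $$

so the prior has no finite normalizer and the posterior it induces is ill-posed @cite_18 . The hierarchical prior of VBD instead takes

$$ w \mid \sigma^2 \sim \mathcal{N}(0, \sigma^2), \qquad \sigma^2 \sim \mathcal{U}(0, \theta), $$

which is normalizable, so the joint posterior over $w$ and $\sigma^2$ is well-defined.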
{ "cite_N": [ "@cite_9", "@cite_10", "@cite_18", "@cite_17" ], "mid": [ "2582745083", "1826234144", "2767630563", "2963117513" ], "abstract": [ "We explore a recently proposed Variational Dropout technique that provided an elegant Bayesian interpretation to Gaussian Dropout. We extend Variational Dropout to the case when dropout rates are unbounded, propose a way to reduce the variance of the gradient estimator and report first experimental results with individual dropout rates per weight. Interestingly, it leads to extremely sparse solutions both in fully-connected and convolutional layers. This effect is similar to automatic relevance determination effect in empirical Bayes but has a number of advantages. We reduce the number of parameters up to 280 times on LeNet architectures and up to 68 times on VGG-like networks with a negligible decrease of accuracy.", "We investigate a local reparameterizaton technique for greatly reducing the variance of stochastic gradients for variational Bayesian inference (SGVB) of a posterior over model parameters, while retaining parallelizability. This local reparameterization translates uncertainty about global parameters into local noise that is independent across datapoints in the minibatch. Such parameterizations can be trivially parallelized and have variance that is inversely proportional to the mini-batch size, generally leading to much faster convergence. Additionally, we explore a connection with dropout: Gaussian dropout objectives correspond to SGVB with local reparameterization, a scale-invariant prior and proportionally fixed posterior variance. Our method allows inference of more flexibly parameterized posteriors; specifically, we propose variational dropout, a generalization of Gaussian dropout where the dropout rates are learned, often leading to better models. The method is demonstrated through several experiments.", "Gaussian multiplicative noise is commonly used as a stochastic regularisation technique in training of deterministic neural networks. A recent paper reinterpreted the technique as a specific algorithm for approximate inference in Bayesian neural networks; several extensions ensued. We show that the log-uniform prior used in all the above publications does not generally induce a proper posterior, and thus Bayesian inference in such models is ill-posed. Independent of the log-uniform prior, the correlated weight noise approximation has further issues leading to either infinite objective or high risk of overfitting. The above implies that the reported sparsity of obtained solutions cannot be explained by Bayesian or the related minimum description length arguments. We thus study the objective from a non-Bayesian perspective, provide its previously unknown analytical form which allows exact gradient evaluation, and show that the later proposed additive reparametrisation introduces minima not present in the original multiplicative parametrisation. Implications and future research directions are discussed.", "Dropout-based regularization methods can be regarded as injecting random noise with pre-defined magnitude to different parts of the neural network during training. It was recently shown that Bayesian dropout procedure not only improves gener- alization but also leads to extremely sparse neural architectures by automatically setting the individual noise magnitude per weight. However, this sparsity can hardly be used for acceleration since it is unstructured. 
In the paper, we propose a new Bayesian model that takes into account the computational structure of neural net- works and provides structured sparsity, e.g. removes neurons and or convolutional channels in CNNs. To do this we inject noise to the neurons outputs while keeping the weights unregularized. We establish the probabilistic model with a proper truncated log-uniform prior over the noise and truncated log-normal variational approximation that ensures that the KL-term in the evidence lower bound is com- puted in closed-form. The model leads to structured sparsity by removing elements with a low SNR from the computation graph and provides significant acceleration on a number of deep neural architectures. The model is easy to implement as it can be formulated as a separate dropout-like layer." ] }
1811.07886
2901865147
Chemical structure elucidation is a serious bottleneck in analytical chemistry today. We address the problem of identifying an unknown chemical threat given its mass spectrum and its chemical formula, a task which might take well-trained chemists several days to complete. Given a chemical formula, there could be over a million possible candidate structures. We take a data-driven approach to rank these structures by using neural networks to predict the presence of substructures given the mass spectrum, and matching these substructures to the candidate structures. Empirically, we evaluate our approach on a data set of chemical agents built for unknown chemical threat identification. We show that our substructure classifiers can attain over 90% micro F1-score, and we can find the correct structure among the top 20 candidates in 88% and 71% of test cases for two compound classes.
Together with the burgeoning use of high-resolution, high-accuracy MS and tandem MS, alternative identification strategies have arisen that search online compound databases such as PubChem @cite_0 and ChemSpider @cite_7 . These have spurred analytical companies such as Thermo Fisher, Agilent Technologies, Waters, and ACD/Labs to develop commercial software, such as Mass Frontier (Thermo Fisher), MassHunter Profinder (Agilent), Progenesis QI (Waters), and the ACD/MS Workbook Suite (ACD/Labs), that uses these online databases to yield substructure information from the mass spectrum of an unknown compound. With this information, the chemist can postulate probable structures, on which the software performs an in-silico fragmentation; from the fragments generated, a match value to the experimental mass spectrum is calculated. However, not only are spectral databases for high-resolution and tandem mass spectrometry limited, but the continued need for a well-trained chemist to elucidate the unknown structure also keeps structural elucidation a challenge.
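As a rough, hypothetical sketch of the matching step described above (the scoring in the commercial packages is proprietary and far more sophisticated; all names, the tolerance, and the toy data below are assumptions):

def match_value(candidate_fragments, spectrum, tol=0.01):
    # Toy match value: intensity-weighted fraction of experimental peaks that
    # are explained by at least one in-silico fragment mass within `tol` Da.
    total = sum(intensity for _, intensity in spectrum)
    if total == 0:
        return 0.0
    explained = sum(
        intensity
        for mz, intensity in spectrum
        if any(abs(mz - frag) <= tol for frag in candidate_fragments)
    )
    return explained / total

# Rank candidate structures by how well their fragments explain the spectrum.
spectrum = [(91.05, 100.0), (119.05, 40.0), (162.08, 15.0)]
candidates = {"A": [91.054, 119.049], "B": [77.04, 105.07]}
ranking = sorted(candidates, key=lambda name: -match_value(candidates[name], spectrum))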
{ "cite_N": [ "@cite_0", "@cite_7" ], "mid": [ "1601495365", "1976892175" ], "abstract": [ "Abstract PubChem is an open repository for experimental data identifying the biological activities of small molecules. PubChem contents include more than: 1000 bioassays, 28 million bioassay test outcomes, 40 million substance contributed descriptions, and 19 million unique compound structures contributed from over 70 depositing organizations. PubChem provides a significant, publicly accessible platform for mining the biological information of small molecules.", "ChemSpider is a free, online chemical database offering access to physical and chemical properties, molecular structure, spectral data, synthetic methods, safety information, and nomenclature for almost 25 million unique chemical compounds sourced and linked to almost 400 separate data sources on the Web. ChemSpider is quickly becoming the primary chemistry Internet portal and it can be very useful for both chemical teaching and research." ] }
1906.10198
2953829678
Studies on emotion recognition (ER) show that combining lexical and acoustic information results in more robust and accurate models. The majority of the studies focus on settings where both modalities are available in training and evaluation. However, in practice, this is not always the case; getting ASR output may represent a bottleneck in a deployment pipeline due to computational complexity or privacy-related constraints. To address this challenge, we study the problem of efficiently combining acoustic and lexical modalities during training while still providing a deployable acoustic model that does not require lexical inputs. We first experiment with multimodal models and two attention mechanisms to assess the extent of the benefits that lexical information can provide. Then, we frame the task as a multi-view learning problem to induce semantic information from a multimodal model into our acoustic-only network using a contrastive loss function. Our multimodal model outperforms the previous state of the art on the USC-IEMOCAP dataset reported on lexical and acoustic information. Additionally, our multi-view-trained acoustic network significantly surpasses models that have been exclusively trained with acoustic features.
Recent work has focused on different ways to fuse the acoustic, lexical, and visual modalities; we narrow the discussion to the acoustic and lexical modalities to match the scope of this paper. In most cases, researchers have used concatenation to fuse the lexical and acoustic representations at different stages of their models. Other works have proposed multimodal pooling fusion, tensor fusion networks, modality hierarchical fusion, context-aware fusion with attention, and conversational memory networks (CMN) @cite_0 . Nevertheless, all of these fusion techniques operate at the utterance level, whereas our work performs multimodal fusion at the word level by introducing acoustic word representations. We compare against the approach that documents the current best performance with lexical and acoustic information on the IEMOCAP dataset under the standard 10-fold speaker-exclusive cross-validation setting.
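As a minimal sketch of word-level fusion (not the authors' exact architecture; all dimensions and layer choices are assumptions):

import torch
import torch.nn as nn

class WordLevelFusion(nn.Module):
    # Toy word-level fusion: each word is represented by the concatenation of
    # its lexical embedding and an acoustic word embedding; a BiGRU then
    # summarizes the fused sequence for emotion classification.
    def __init__(self, lex_dim=300, ac_dim=128, hidden=128, n_classes=4):
        super().__init__()
        self.gru = nn.GRU(lex_dim + ac_dim, hidden,
                          batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, lex_words, ac_words):
        # lex_words: (B, T, lex_dim); ac_words: (B, T, ac_dim), word-aligned.
        fused = torch.cat([lex_words, ac_words], dim=-1)
        _, h = self.gru(fused)                    # h: (2, B, hidden)
        utterance = torch.cat([h[0], h[1]], dim=-1)
        return self.classifier(utterance)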
{ "cite_N": [ "@cite_0" ], "mid": [ "2767461737" ], "abstract": [ "In this paper, we present an analysis of different multimodal fusion approaches in the context of deep learning, focusing on pooling intermediate representations learned for the acoustic and lexical modalities. Traditional approaches to multimodal feature pooling include: concatenation, element-wise addition, and element-wise multiplication. We compare these traditional methods to outer-product and compact bilinear pooling approaches, which consider more comprehensive interactions between features from the two modalities. We also study the influence of each modality on the overall performance of a multimodal system. Our experiments on the IEMOCAP dataset suggest that: (1) multimodal methods that combine acoustic and lexical features outperform their unimodal counterparts; (2) the lexical modality is better for predicting valence than the acoustic modality; (3) outer-product-based pooling strategies outperform other pooling strategies." ] }
1906.10104
2951448443
We propose an automated method to estimate a road segment's free-flow speed from overhead imagery and road metadata. The free-flow speed of a road segment is the average observed vehicle speed in ideal conditions, without congestion or adverse weather. Standard practice for estimating free-flow speeds depends on several road attributes, including grade, curve, and width of the right of way. Unfortunately, many of these fine-grained labels are not always readily available and are costly to manually annotate. To compensate, our model uses a small, easy to obtain subset of road features along with aerial imagery to directly estimate free-flow speed with a deep convolutional neural network (CNN). We evaluate our approach on a large dataset, and demonstrate that using imagery alone performs nearly as well as the road features and that the combination of imagery with road features leads to the highest accuracy.
Various approaches have been proposed to estimate and map properties of the visual world using overhead images. Several authors have proposed deep-learning-based methods for vehicle detection @cite_7 @cite_9 and road extraction @cite_3 @cite_2 @cite_4 @cite_11 from aerial images. @cite_6 introduced an approach for mapping soundscapes of geographic regions using overhead imagery, and @cite_10 proposed a model capable of predicting object histograms from overhead imagery. Several works have addressed speed estimation; @cite_0 introduced an approach to estimate vehicle speed in traffic videos. Most similar to our work, @cite_1 proposed a model for road safety estimation based on the usRAP Star Rating Protocol. While this star rating is based on approximately 60 road safety features @cite_5 , their network works directly on ground-level panorama images. We propose a new method that instead uses overhead imagery together with auxiliary road features.
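As a minimal sketch of such a two-branch design (the ResNet-18 backbone, feature dimensions, and regression head are our assumptions, not the cited architecture):

import torch
import torch.nn as nn
import torchvision.models as models

class FreeFlowSpeedNet(nn.Module):
    # Toy two-branch model: CNN features from an overhead image tile are
    # concatenated with a small vector of road metadata before regression.
    def __init__(self, n_road_features=8):
        super().__init__()
        backbone = models.resnet18()      # backbone choice is our assumption
        backbone.fc = nn.Identity()       # expose the 512-d image features
        self.backbone = backbone
        self.head = nn.Sequential(
            nn.Linear(512 + n_road_features, 128),
            nn.ReLU(),
            nn.Linear(128, 1),            # predicted free-flow speed
        )

    def forward(self, image, road_features):
        feats = torch.cat([self.backbone(image), road_features], dim=-1)
        return self.head(feats)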
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_9", "@cite_1", "@cite_3", "@cite_6", "@cite_0", "@cite_2", "@cite_5", "@cite_10", "@cite_11" ], "mid": [ "", "2613886079", "2612465689", "2964324313", "2774320778", "2900759294", "2900327972", "", "", "2885097192", "" ], "abstract": [ "", "Since the introduction of deep convolutional neural networks (CNNs), object detection in imagery has witnessed substantial breakthroughs in state-of-the-art performance. The defense community utilizes overhead image sensors that acquire large field-of-view aerial imagery in various bands of the electromagnetic spectrum, which is then exploited for various applications, including the detection and localization of man-made objects. In this work, we utilize a recent state-of-the art object detection algorithm, faster R-CNN, to train a deep CNN for vehicle detection in multimodal imagery. We utilize the vehicle detection in aerial imagery (VEDAI) dataset, which contains overhead imagery that is representative of an ISR setting. Our contribution includes modification of key parameters in the faster R-CNN algorithm for this setting where the objects of interest are spatially small, occupying less than 1:5×10-3 of the total image pixels. Our experiments show that (1) an appropriately trained deep CNN leads to average precision rates above 93 on vehicle detection, and (2) transfer learning between imagery modalities is possible, yielding average precision rates above 90 in the absence of fine-tuning.", "Vehicle detection in aerial images is a crucial image processing step for many applications like screening of large areas. In recent years, several deep learning based frameworks have been proposed for object detection. However, these detectors were developed for datasets that considerably differ from aerial images. In this paper, we systematically investigate the potential of Fast R-CNN and Faster R-CNN for aerial images, which achieve top performing results on common detection benchmark datasets. Therefore, the applicability of 8 state-of-the-art object proposals methods used to generate a set of candidate regions and of both detectors is examined. Relevant adaptations of the object proposals methods are provided. To overcome shortcomings of the original approach in case of handling small instances, we further propose our own network that clearly outperforms state-of-the-art methods for vehicle detection in aerial images. All experiments are performed on two publicly available datasets to account for differing characteristics such as ground sampling distance, number of objects per image and varying backgrounds.", "This paper addresses the task of road safety assessment. An emerging approach for conducting such assessments in the United States is through the US Road Assessment Program (usRAP), which rates roads from highest risk (1 star) to lowest (5 stars). Obtaining these ratings requires manual, fine-grained labeling of roadway features in streetlevel panoramas, a slow and costly process. We propose to automate this process using a deep convolutional neural network that directly estimates the star rating from a street-level panorama, requiring milliseconds per image at test time. Our network also estimates many other roadlevel attributes, including curvature, roadside hazards, and the type of median. To support this, we incorporate taskspecific attention layers so the network can focus on the panorama regions that are most useful for a particular task. 
We evaluated our approach on a large dataset of real-world images from two US states. We found that incorporating additional tasks, and using a semi-supervised training approach, significantly reduced overfitting problems, allowed us to optimize more layers of the network, and resulted in higher accuracy.", "Road extraction from aerial images has been a hot research topic in the field of remote sensing image analysis. In this letter, a semantic segmentation neural network, which combines the strengths of residual learning and U-Net, is proposed for road area extraction. The network is built with residual units and has similar architecture to that of U-Net. The benefits of this model are twofold: first, residual units ease training of deep networks. Second, the rich skip connections within the network could facilitate information propagation, allowing us to design networks with fewer parameters, however, better performance. We test our network on a public road data set and compare it with U-Net and other two state-of-the-art deep-learning-based road extraction methods. The proposed approach outperforms all the comparing methods, which demonstrates its superiority over recently developed state of the arts.", "We explore the problem of mapping soundscapes, that is, predicting the types of sounds that are likely to be heard at a given geographic location. Using a novel dataset, which includes geo-tagged audio and overhead imagery, we develop an approach for constructing an aural atlas, which captures the geospatial distribution of soundscapes. We build on previous work relating sound to ground-level imagery but incorporate overhead imagery to overcome the limitations of sparsely distributed geo-tagged audio. In the end, all that we require to construct an aural atlas is overhead imagery of the region of interest. We show examples of aural atlases at multiple spatial scales, from block-level to country.", "The rapid recent advancements in the computation ability of everyday computers have made it possible to widely apply deep learning methods to the analysis of traffic surveillance videos. Traffic flow prediction, anomaly detection, vehicle re-identification, and vehicle tracking are basic components in traffic analysis. Among these applications, traffic flow prediction, or vehicle speed estimation, is one of the most important research topics of recent years. Good solutions to this problem could prevent traffic collisions and help improve road planning by better estimating transit demand. In the 2018 NVIDIA AI City Challenge, we combine modern deep learning models with classic computer vision approaches to propose an efficient way to predict vehicle speed. In this paper, we introduce some state-of-the-art approaches in vehicle speed estimation, vehicle detection, and object tracking, as well as our solution for Track 1 of the Challenge.", "", "", "In this work, we propose a cross-view learning approach, in which images captured from a ground-level view are used as weakly supervised annotations for interpreting overhead imagery. The outcome is a convolutional neural network for overhead imagery that is capable of predicting the type and count of objects that are likely to be seen from a ground-level perspective. We demonstrate our approach on a large dataset of geotagged ground-level and overhead imagery and find that our network captures semantically meaningful features, despite being trained without manual annotations.", "" ] }
1906.10124
2954444343
In recent years, reinforcement learning has been successful in solving video games from Atari to StarCraft II. However, end-to-end model-free reinforcement learning (RL) is not sample efficient and requires a significant amount of computational resources to achieve superhuman-level performance. Model-free RL is also unlikely to produce human-like agents for playtesting and gameplaying AI in the development cycle of complex video games. In this paper, we present a hierarchical approach to training agents with the goal of achieving human-like style and a high skill level in team sports games. While this is still work in progress, our preliminary results show that the presented approach holds promise for solving the posed multi-agent learning problem.
Our problem naturally lends itself to the multi-agent learning (MAL) framework. In such a framework, iteratively optimizing a policy can fail to converge because the stationarity of the decision process breaks down and the state space is only partially observable @cite_16 @cite_9 : the environment for each agent changes whenever any other agent updates its policy, and hence independent reinforcement learning agents do not work well in practice @cite_0 .
{ "cite_N": [ "@cite_0", "@cite_9", "@cite_16" ], "mid": [ "2096145798", "2122763142", "1542941925" ], "abstract": [ "In the framework of fully cooperative multi-agent systems, independent (non-communicative) agents that learn by reinforcement must overcome several difficulties to manage to coordinate. This paper identifies several challenges responsible for the non-coordination of independent agents: Pareto-selection, non-stationarity, stochasticity, alter-exploration and shadowed equilibria. A selection of multi-agent domains is classified according to those challenges: matrix games, Boutilier's coordination game, predators pursuit domains and a special multi-state game. Moreover, the performance of a range of algorithms for independent reinforcement learners is evaluated empirically. Those algorithms are Q-learning variants: decentralized Q-learning, distributed Q-learning, hysteretic Q-learning, recursive frequency maximum Q-value and win-or-learn fast policy hill climbing. An overview of the learning algorithms' strengths and weaknesses against each challenge concludes the paper and can serve as a basis for choosing the appropriate algorithm for a new domain. Furthermore, the distilled challenges may assist in the design of new learning algorithms that overcome these problems and achieve higher performance in multi-agent applications.", "In large multiagent games, partial observability, coordination, and credit assignment persistently plague attempts to design good learning algorithms. We provide a simple and efficient algorithm that in part uses a linear system to model the world from a single agent's limited perspective, and takes advantage of Kalman filtering to allow an agent to construct a good training signal and learn an effective policy.", "In the Markov decision process (MDP) formalization of reinforcement learning, a single adaptive agent interacts with an environment defined by a probabilistic transition function. In this solipsis-tic view, secondary agents can only be part of the environment and are therefore fixed in their behavior. The framework of Markov games allows us to widen this view to include multiple adaptive agents with interacting or competing goals. This paper considers a step in this direction in which exactly two agents with diametrically opposed goals share an environment. It describes a Q-learning-like algorithm for finding optimal policies and demonstrates its application to a simple two-player game in which the optimal policy is probabilistic." ] }
1906.10124
2954444343
In recent years, reinforcement learning has been successful in solving video games from Atari to StarCraft II. However, end-to-end model-free reinforcement learning (RL) is not sample efficient and requires a significant amount of computational resources to achieve superhuman-level performance. Model-free RL is also unlikely to produce human-like agents for playtesting and gameplaying AI in the development cycle of complex video games. In this paper, we present a hierarchical approach to training agents with the goal of achieving human-like style and a high skill level in team sports games. While this is still work in progress, our preliminary results show that the presented approach holds promise for solving the posed multi-agent learning problem.
More recently, @cite_46 proposed an actor-critic algorithm with a centralized critic during training and a decentralized actor at training and inference. @cite_2 compare policy gradient, temporal-difference error, and actor-critic methods on cooperative deep multi-agent reinforcement learning (MARL). See @cite_40 @cite_39 for recent surveys on MAL and deep MARL advancements.
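As a minimal sketch of the centralized-critic idea (loosely in the spirit of @cite_46 ; all class names, dimensions, and layer sizes are our assumptions):

import torch
import torch.nn as nn

class CentralizedCritic(nn.Module):
    # Toy centralized critic for centralized-training, decentralized-execution:
    # during training it scores the joint observations and actions of all
    # agents; at execution time only the per-agent actors (not shown) are used.
    def __init__(self, n_agents, obs_dim, act_dim, hidden=256):
        super().__init__()
        joint_dim = n_agents * (obs_dim + act_dim)
        self.net = nn.Sequential(
            nn.Linear(joint_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, all_obs, all_actions):
        # all_obs: (B, n_agents, obs_dim); all_actions: (B, n_agents, act_dim)
        x = torch.cat([all_obs.flatten(1), all_actions.flatten(1)], dim=1)
        return self.net(x)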
{ "cite_N": [ "@cite_40", "@cite_46", "@cite_39", "@cite_2" ], "mid": [ "2740377041", "2963407617", "2895865957", "2768629321" ], "abstract": [ "The key challenge in multiagent learning is learning a best response to the behaviour of other agents, which may be non-stationary: if the other agents adapt their strategy as well, the learning target moves. Disparate streams of research have approached non-stationarity from several angles, which make a variety of implicit assumptions that make it hard to keep an overview of the state of the art and to validate the innovation and significance of new works. This survey presents a coherent overview of work that addresses opponent-induced non-stationarity with tools from game theory, reinforcement learning and multi-armed bandits. Further, we reflect on the principle approaches how algorithms model and cope with this non-stationarity, arriving at a new framework and five categories (in increasing order of sophistication): ignore, forget, respond to target models, learn models, and theory of mind. A wide range of state-of-the-art algorithms is classified into a taxonomy, using these categories and key characteristics of the environment (e.g., observability) and adaptation behaviour of the opponents (e.g., smooth, abrupt). To clarify even further we present illustrative variations of one domain, contrasting the strengths and limitations of each category. Finally, we discuss in which environments the different approaches yield most merit, and point to promising avenues of future research.", "We explore deep reinforcement learning methods for multi-agent domains. We begin by analyzing the difficulty of traditional algorithms in the multi-agent case: Q-learning is challenged by an inherent non-stationarity of the environment, while policy gradient suffers from a variance that increases as the number of agents grows. We then present an adaptation of actor-critic methods that considers action policies of other agents and is able to successfully learn policies that require complex multi-agent coordination. Additionally, we introduce a training regimen utilizing an ensemble of policies for each agent that leads to more robust multi-agent policies. We show the strength of our approach compared to existing methods in cooperative as well as competitive scenarios, where agent populations are able to discover various physical and informational coordination strategies.", "Deep reinforcement learning (RL) has achieved outstanding results in recent years. This has led to a dramatic increase in the number of applications and methods. Recent works have explored learning beyond single-agent scenarios and have considered multiagent learning (MAL) scenarios. Initial results report successes in complex multiagent domains, although there are several challenges to be addressed. The primary goal of this article is to provide a clear overview of current multiagent deep reinforcement learning (MDRL) literature. Additionally, we complement the overview with a broader analysis: (i) we revisit previous key components, originally presented in MAL and RL, and highlight how they have been adapted to multiagent deep reinforcement learning settings. (ii) We provide general guidelines to new practitioners in the area: describing lessons learned from MDRL works, pointing to recent benchmarks, and outlining open avenues of research. (iii) We take a more critical tone raising practical challenges of MDRL (e.g., implementation and computational demands). 
We expect this article will help unify and motivate future research to take advantage of the abundant literature that exists (e.g., RL and MAL) in a joint effort to promote fruitful research in the multiagent community.", "This work considers the problem of learning cooperative policies in complex, partially observable domains without explicit communication. We extend three classes of single-agent deep reinforcement learning algorithms based on policy gradient, temporal-difference error, and actor-critic methods to cooperative multi-agent systems. To effectively scale these algorithms beyond a trivial number of agents, we combine them with a multi-agent variant of curriculum learning. The algorithms are benchmarked on a suite of cooperative control tasks, including tasks with discrete and continuous actions, as well as tasks with dozens of cooperating agents. We report the performance of the algorithms using different neural architectures, training procedures, and reward structures. We show that policy gradient methods tend to outperform both temporal-difference and actor-critic methods and that curriculum learning is vital to scaling reinforcement learning algorithms in complex multi-agent domains." ] }
1906.10124
2954444343
In recent years, reinforcement learning has been successful in solving video games from Atari to StarCraft II. However, end-to-end model-free reinforcement learning (RL) is not sample efficient and requires a significant amount of computational resources to achieve superhuman-level performance. Model-free RL is also unlikely to produce human-like agents for playtesting and gameplaying AI in the development cycle of complex video games. In this paper, we present a hierarchical approach to training agents with the goal of achieving human-like style and a high skill level in team sports games. While this is still work in progress, our preliminary results show that the presented approach holds promise for solving the posed multi-agent learning problem.
It is also possible to use demonstrations to guide RL. @cite_25 train off-policy RL using demonstrations. @cite_55 use behavioral cloning to initialize the value and policy networks used to solve Go, and @cite_34 builds on the same idea. @cite_50 @cite_38 place demonstrations in the replay buffer to guide the policy toward a better local optimum. @cite_12 @cite_29 shape the reward function to promote actions that mimic the demonstrator. @cite_52 use demonstrations to teach the policy to avoid catastrophic events in the game of Pommerman, where model-free RL fails.
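As a minimal sketch of mixing demonstrations into the replay buffer (the cited works tune the demonstration ratio automatically via prioritized replay; the fixed ratio and all names here are simplifying assumptions, and both buffers are assumed large enough to sample from):

import random

def sample_mixed_batch(demo_buffer, agent_buffer, batch_size, demo_fraction=0.25):
    # Toy mixed sampling: a fixed fraction of every training batch comes from
    # human demonstrations, the rest from the agent's own experience.
    n_demo = min(int(batch_size * demo_fraction), len(demo_buffer))
    batch = random.sample(demo_buffer, n_demo)
    batch += random.sample(agent_buffer, batch_size - n_demo)
    random.shuffle(batch)
    return batch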
{ "cite_N": [ "@cite_38", "@cite_55", "@cite_29", "@cite_52", "@cite_50", "@cite_34", "@cite_25", "@cite_12" ], "mid": [ "", "2145339207", "", "2936181521", "2741122588", "", "2104733512", "2757631751" ], "abstract": [ "", "An artificial agent is developed that learns to play a diverse range of classic Atari 2600 computer games directly from sensory experience, achieving a performance comparable to that of an expert human player; this work paves the way to building general-purpose learning algorithms that bridge the divide between perception and action.", "", "Safe reinforcement learning has many variants and it is still an open research problem. Here, we focus on how to use action guidance by means of a non-expert demonstrator to avoid catastrophic events in a domain with sparse, delayed, and deceptive rewards: the recently-proposed multi-agent benchmark of Pommerman. This domain is very challenging for reinforcement learning (RL) --- past work has shown that model-free RL algorithms fail to achieve significant learning. In this paper, we shed light into the reasons behind this failure by exemplifying and analyzing the high rate of catastrophic events (i.e., suicides) that happen under random exploration in this domain. While model-free random exploration is typically futile, we propose a new framework where even a non-expert simulated demonstrator, e.g., planning algorithms such as Monte Carlo tree search with small number of rollouts, can be integrated to asynchronous distributed deep reinforcement learning methods. Compared to vanilla deep RL algorithms, our proposed methods both learn faster and converge to better policies on a two-player mini version of the Pommerman game.", "We propose a general and model-free approach for Reinforcement Learning (RL) on real robotics with sparse rewards. We build upon the Deep Deterministic Policy Gradient (DDPG) algorithm to use demonstrations. Both demonstrations and actual interactions are used to fill a replay buffer and the sampling ratio between demonstrations and transitions is automatically tuned via a prioritized replay mechanism. Typically, carefully engineered shaping rewards are required to enable the agents to efficiently explore on high dimensional control problems such as robotics. They are also required for model-based acceleration methods relying on local solvers such as iLQG (e.g. Guided Policy Search and Normalized Advantage Function). The demonstrations replace the need for carefully engineered rewards, and reduce the exploration problem encountered by classical RL approaches in these domains. Demonstrations are collected by a robot kinesthetically force-controlled by a human demonstrator. Results on four simulated insertion tasks show that DDPG from demonstrations out-performs DDPG, and does not require engineered rewards. Finally, we demonstrate the method on a real robotics task consisting of inserting a clip (flexible object) into a rigid object.", "", "Direct policy search can effectively scale to high-dimensional systems, but complex policies with hundreds of parameters often present a challenge for such methods, requiring numerous samples and often falling into poor local optima. We present a guided policy search algorithm that uses trajectory optimization to direct policy learning and avoid poor local optima. We show how differential dynamic programming can be used to generate suitable guiding samples, and describe a regularized importance sampled policy optimization that incorporates these samples into the policy search. 
We evaluate the method by learning neural network controllers for planar swimming, hopping, and walking, as well as simulated 3D humanoid running.", "Dexterous multi-fingered hands are extremely versatile and provide a generic way to perform multiple tasks in human-centric environments. However, effectively controlling them remains challenging due to their high dimensionality and large number of potential contacts. Deep reinforcement learning (DRL) provides a model-agnostic approach to control complex dynamical systems, but has not been shown to scale to high-dimensional dexterous manipulation. Furthermore, deployment of DRL on physical systems remains challenging due to sample inefficiency. Thus, the success of DRL in robotics has thus far been limited to simpler manipulators and tasks. In this work, we show that model-free DRL with natural policy gradients can effectively scale up to complex manipulation tasks with a high-dimensional 24-DoF hand, and solve them from scratch in simulated experiments. Furthermore, with the use of a small number of human demonstrations, the sample complexity can be significantly reduced, and enable learning within the equivalent of a few hours of robot experience. We demonstrate successful policies for multiple complex tasks: object relocation, in-hand manipulation, tool use, and door opening." ] }
1906.10124
2954444343
In recent years, reinforcement learning has been successful in solving video games from Atari to StarCraft II. However, end-to-end model-free reinforcement learning (RL) is not sample efficient and requires a significant amount of computational resources to achieve superhuman-level performance. Model-free RL is also unlikely to produce human-like agents for playtesting and gameplaying AI in the development cycle of complex video games. In this paper, we present a hierarchical approach to training agents with the goal of achieving human-like style and a high skill level in team sports games. While this is still work in progress, our preliminary results show that the presented approach holds promise for solving the posed multi-agent learning problem.
To manage the complexity of the posed problem (see ), our solution involves a hierarchical approach. @cite_22 consider a hierarchical approach where the low-level actions are learned via RL whereas the high-level goals are picked up via IL from human demonstrations. This is in contrast to the hierarchy considered in this paper, where we use IL at the low level to achieve human-like behavior. @cite_41 manage the complexity of the StarCraft learning environment @cite_11 by decomposing the problem into a hierarchy of simpler learning tasks. @cite_17 apply a planning layer on top of RL and infer the necessary abstractions from data as well. Finally, @cite_28 consider a bi-level neural network architecture in which a Manager sets goals at a low temporal resolution while a Worker produces primitive actions, conditioned on those high-level goals, at a high temporal resolution. More recently, @cite_36 provide a hierarchical generative model for achieving human-like gameplay using weak supervision.
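As a minimal sketch of the two-timescale loop described above, assuming a classic Gym-style environment interface; the manager and worker policies, the goal representation, and the period k are placeholders:

def run_episode(manager_policy, worker_policy, env, max_steps=1000, k=10):
    # Toy two-level control loop: the manager picks a goal every k steps
    # (low temporal resolution); the worker emits a primitive action at every
    # step (high temporal resolution), conditioned on the active goal.
    obs = env.reset()
    goal = None
    for t in range(max_steps):
        if t % k == 0:
            goal = manager_policy(obs)        # high-level decision
        action = worker_policy(obs, goal)     # low-level, e.g. trained via IL
        obs, reward, done, info = env.step(action)  # classic 4-tuple Gym API
        if done:
            break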
{ "cite_N": [ "@cite_11", "@cite_22", "@cite_41", "@cite_28", "@cite_36", "@cite_17" ], "mid": [ "2749807327", "2964250417", "2951585678", "2949267040", "2910285800", "" ], "abstract": [ "This paper introduces SC2LE (StarCraft II Learning Environment), a reinforcement learning environment based on the StarCraft II game. This domain poses a new grand challenge for reinforcement learning, representing a more difficult class of problems than considered in most prior work. It is a multi-agent problem with multiple players interacting; there is imperfect information due to a partially observed map; it has a large action space involving the selection and control of hundreds of units; it has a large state space that must be observed solely from raw input feature planes; and it has delayed credit assignment requiring long-term strategies over thousands of steps. We describe the observation, action, and reward specification for the StarCraft II domain and provide an open source Python-based interface for communicating with the game engine. In addition to the main game maps, we provide a suite of mini-games focusing on different elements of StarCraft II gameplay. For the main game maps, we also provide an accompanying dataset of game replay data from human expert players. We give initial baseline results for neural networks trained from this data to predict game outcomes and player actions. Finally, we present initial baseline results for canonical deep reinforcement learning agents applied to the StarCraft II domain. On the mini-games, these agents learn to achieve a level of play that is comparable to a novice player. However, when trained on the main game, these agents are unable to make significant progress. Thus, SC2LE offers a new and challenging environment for exploring deep reinforcement learning algorithms and architectures.", "We study how to effectively leverage expert feedback to learn sequential decision-making policies. We focus on problems with sparse rewards and long time horizons, which typically pose significant challenges in reinforcement learning. We propose an algorithmic framework, called hierarchical guidance, that leverages the hierarchical structure of the underlying problem to integrate different modes of expert interaction. Our framework can incorporate different combinations of imitation learning (IL) and reinforcement learning (RL) at different levels, leading to dramatic reductions in both expert effort and cost of exploration. Using long-horizon benchmarks, including Montezuma’s Revenge, we demonstrate that our approach can learn significantly faster than hierarchical RL, and be significantly more label-efficient than standard IL. We also theoretically analyze labeling cost for certain instantiations of our framework.", "StarCraft II poses a grand challenge for reinforcement learning. The main difficulties of it include huge state and action space and a long-time horizon. In this paper, we investigate a hierarchical reinforcement learning approach for StarCraft II. The hierarchy involves two levels of abstraction. One is the macro-action automatically extracted from expert's trajectories, which reduces the action space in an order of magnitude yet remains effective. The other is a two-layer hierarchical architecture which is modular and easy to scale, enabling a curriculum transferring from simpler tasks to more complex tasks. The reinforcement training algorithm for this architecture is also investigated. 
On a 64x64 map and using restrictive units, we achieve a winning rate of more than 99 against the difficulty level-1 built-in AI. Through the curriculum transfer learning algorithm and a mixture of combat model, we can achieve over 93 winning rate of Protoss against the most difficult non-cheating built-in AI (level-7) of Terran, training within two days using a single machine with only 48 CPU cores and 8 K40 GPUs. It also shows strong generalization performance, when tested against never seen opponents including cheating levels built-in AI and all levels of Zerg and Protoss built-in AI. We hope this study could shed some light on the future research of large-scale reinforcement learning.", "We introduce FeUdal Networks (FuNs): a novel architecture for hierarchical reinforcement learning. Our approach is inspired by the feudal reinforcement learning proposal of Dayan and Hinton, and gains power and efficacy by decoupling end-to-end learning across multiple levels -- allowing it to utilise different resolutions of time. Our framework employs a Manager module and a Worker module. The Manager operates at a lower temporal resolution and sets abstract goals which are conveyed to and enacted by the Worker. The Worker generates primitive actions at every tick of the environment. The decoupled structure of FuN conveys several benefits -- in addition to facilitating very long timescale credit assignment it also encourages the emergence of sub-policies associated with different goals set by the Manager. These properties allow FuN to dramatically outperform a strong baseline agent on tasks that involve long-term credit assignment or memorisation. We demonstrate the performance of our proposed system on a range of tasks from the ATARI suite and also from a 3D DeepMind Lab environment.", "We study the problem of training sequential generative models for capturing coordinated multi-agent trajectory behavior, such as offensive basketball gameplay. When modeling such settings, it is often beneficial to design hierarchical models that can capture long-term coordination using intermediate variables. Furthermore, these intermediate variables should capture interesting high-level behavioral semantics in an interpretable and manipulatable way. We present a hierarchical framework that can effectively learn such sequential generative models. Our approach is inspired by recent work on leveraging programmatically produced weak labels, which we extend to the spatiotemporal regime. In addition to synthetic settings, we show how to instantiate our framework to effectively model complex interactions between basketball players and generate realistic multi-agent trajectories of basketball gameplay over long time periods. We validate our approach using both quantitative and qualitative evaluations, including a user study comparison conducted with professional sports analysts.", "" ] }
1906.10124
2954444343
In recent years, reinforcement learning has been successful in solving video games from Atari to StarCraft II. However, end-to-end model-free reinforcement learning (RL) is not sample efficient and requires a significant amount of computational resources to achieve superhuman-level performance. Model-free RL is also unlikely to produce human-like agents for playtesting and gameplaying AI in the development cycle of complex video games. In this paper, we present a hierarchical approach to training agents with the goal of achieving human-like style and a high skill level in team sports games. While this is still work in progress, our preliminary results show that the presented approach holds promise for solving the posed multi-agent learning problem.
The human-robot interaction problem shares many similarities with the problem at hand @cite_26 . However, training agents in video games is simpler in many ways. First, the agents can execute their policies centrally, so there is no need for decentralized execution. Second, extracting semantic information from sensory signals, such as processing images and videos or converting speech to text, is not needed, since all of the semantic information is available directly from the game engine. On the other hand, many of the sample-efficient learning techniques designed for training robots are applicable to training agents in team sports video games as well @cite_20 .
{ "cite_N": [ "@cite_26", "@cite_20" ], "mid": [ "2339027962", "2915273306" ], "abstract": [ "Objective:The current status of human–robot interaction (HRI) is reviewed, and key current research challenges for the human factors community are described.Background:Robots have evolved from continuous human-controlled master–slave servomechanisms for handling nuclear waste to a broad range of robots incorporating artificial intelligence for many applications and under human supervisory control.Methods:This mini-review describes HRI developments in four application areas and what are the challenges for human factors research.Results:In addition to a plethora of research papers, evidence of success is manifest in live demonstrations of robot capability under various forms of human control.Conclusions:HRI is a rapidly evolving field. Specialized robots under human teleoperation have proven successful in hazardous environments and medical application, as have specialized telerobots under human supervisory control for space and repetitive industrial tasks. Research in areas of self-driving cars, intimate co...", "This study presents a learning-by-imitation technique that learns social robot interaction behaviors from natural human- human interaction data and requires minimum input from a designer. To solve the problem of responding to ambiguous human actions, a novel topic clustering algorithm based on action cooccurrence frequencies is introduced. The system learns human-readable rules that dictate which action the robot should take, based on the most recent human action and the current estimated topic of conversation. The technique is demonstrated in a scenario where the robot learns to play the role of a travel agent. The proposed technique outperformed several baseline techniques in qualitative and quantitative evaluations. It responded more accurately to ambiguous questions and participants found it was easier to understand, provided more information, and required less effort to interact with." ] }
1906.10112
2950419363
We introduce a framework that uses Generative Adversarial Networks (GANs) to study cognitive properties like memorability, aesthetics, and emotional valence. These attributes are of interest because we do not have a concrete visual definition of what they entail. What does it look like for a dog to be more or less memorable? GANs allow us to generate a manifold of natural-looking images with fine-grained differences in their visual attributes. By navigating this manifold in directions that increase memorability, we can visualize what it looks like for a particular generated image to become more or less memorable. The resulting "visual definitions" surface image properties (like "object size") that may underlie memorability. Through behavioral experiments, we verify that our method indeed discovers image manipulations that causally affect human memory performance. We further demonstrate that the same framework can be used to analyze image aesthetics and emotional valence. Visit the GANalyze website at this http URL.
Generative Adversarial Networks or GANs. GANs @cite_29 introduced a revolutionary framework to synthesize natural-looking images @cite_8 @cite_28 @cite_16 @cite_17 . Among the many applications of GANs are style transfer @cite_19 , visual prediction @cite_9 , and "sim2real" domain adaptation @cite_13 . Here, we show how they can also be applied to the problem of understanding high-level, cognitive image properties such as memorability.
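As a minimal sketch of navigating the latent manifold as described above (the direction vector is what the framework learns by maximizing an assessor's score; here it is assumed given, and generator and assessor are placeholder callables returning an image and a scalar score, respectively):

import numpy as np

def latent_walk(generator, assessor, z, direction,
                alphas=(-0.2, -0.1, 0.0, 0.1, 0.2)):
    # Toy latent walk: move a latent code z along a (learned) direction and
    # record how an assessor, e.g. a memorability predictor, scores each
    # generated image. Small steps keep samples near the data manifold.
    results = []
    for a in alphas:
        image = generator(z + a * np.asarray(direction))
        results.append((a, float(assessor(image))))
    return results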
{ "cite_N": [ "@cite_8", "@cite_28", "@cite_29", "@cite_9", "@cite_19", "@cite_16", "@cite_13", "@cite_17" ], "mid": [ "2904367110", "2952716587", "", "2248556341", "2962793481", "2950893734", "2758237641", "2766527293" ], "abstract": [ "We propose an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature. The new architecture leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images (e.g., freckles, hair), and it enables intuitive, scale-specific control of the synthesis. The new generator improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation. To quantify interpolation quality and disentanglement, we propose two new, automated methods that are applicable to any generator architecture. Finally, we introduce a new, highly varied and high-quality dataset of human faces.", "Despite recent progress in generative image modeling, successfully generating high-resolution, diverse samples from complex datasets such as ImageNet remains an elusive goal. To this end, we train Generative Adversarial Networks at the largest scale yet attempted, and study the instabilities specific to such scale. We find that applying orthogonal regularization to the generator renders it amenable to a simple \"truncation trick,\" allowing fine control over the trade-off between sample fidelity and variety by reducing the variance of the Generator's input. Our modifications lead to models which set the new state of the art in class-conditional image synthesis. When trained on ImageNet at 128x128 resolution, our models (BigGANs) achieve an Inception Score (IS) of 166.5 and Frechet Inception Distance (FID) of 7.4, improving over the previous best IS of 52.52 and FID of 18.6.", "", "Learning to predict future images from a video sequence involves the construction of an internal representation that models the image evolution accurately, and therefore, to some degree, its content and dynamics. This is why pixel-space video prediction may be viewed as a promising avenue for unsupervised feature learning. In addition, while optical flow has been a very studied problem in computer vision for a long time, future frame prediction is rarely approached. Still, many vision applications could benefit from the knowledge of the next frames of videos, that does not require the complexity of tracking every pixel trajectories. In this work, we train a convolutional network to generate future frames given an input sequence. To deal with the inherently blurry predictions obtained from the standard Mean Squared Error (MSE) loss function, we propose three different and complementary feature learning strategies: a multi-scale architecture, an adversarial training method, and an image gradient difference loss function. We compare our predictions to different published results based on recurrent neural networks on the UCF101 dataset", "Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. 
We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.", "In this paper, we propose the Self-Attention Generative Adversarial Network (SAGAN) which allows attention-driven, long-range dependency modeling for image generation tasks. Traditional convolutional GANs generate high-resolution details as a function of only spatially local points in lower-resolution feature maps. In SAGAN, details can be generated using cues from all feature locations. Moreover, the discriminator can check that highly detailed features in distant portions of the image are consistent with each other. Furthermore, recent work has shown that generator conditioning affects GAN performance. Leveraging this insight, we apply spectral normalization to the GAN generator and find that this improves training dynamics. The proposed SAGAN achieves the state-of-the-art results, boosting the best published Inception score from 36.8 to 52.52 and reducing Frechet Inception distance from 27.62 to 18.65 on the challenging ImageNet dataset. Visualization of the attention layers shows that the generator leverages neighborhoods that correspond to object shapes rather than local regions of fixed shape.", "Instrumenting and collecting annotated visual grasping datasets to train modern machine learning algorithms can be extremely time-consuming and expensive. An appealing alternative is to use off-the-shelf simulators to render synthetic data for which ground-truth annotations are generated automatically. Unfortunately, models trained purely on simulated data often fail to generalize to the real world. We study how randomized simulated environments and domain adaptation methods can be extended to train a grasping system to grasp novel objects from raw monocular RGB images. We extensively evaluate our approaches with a total of more than 25,000 physical test grasps, studying a range of simulation conditions and domain adaptation methods, including a novel extension of pixel-level domain adaptation that we term the GraspGAN. We show that, by using synthetic data and domain adaptation, we are able to reduce the number of real-world samples needed to achieve a given level of performance by up to 50 times, using only randomly generated simulated objects. We also show that by using only unlabeled real-world data and our GraspGAN methodology, we obtain real-world grasping performance without any real-world labels that is similar to that achieved with 939,777 labeled real-world samples.", "We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. 
This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset." ] }
1906.10112
2950419363
We introduce a framework that uses Generative Adversarial Networks (GANs) to study cognitive properties like memorability, aesthetics, and emotional valence. These attributes are of interest because we do not have a concrete visual definition of what they entail. What does it look like for a dog to be more or less memorable? GANs allow us to generate a manifold of natural-looking images with fine-grained differences in their visual attributes. By navigating this manifold in directions that increase memorability, we can visualize what it looks like for a particular generated image to become more or less memorable. The resulting visual definitions" surface image properties (like object size") that may underlie memorability. Through behavioral experiments, we verify that our method indeed discovers image manipulations that causally affect human memory performance. We further demonstrate that the same framework can be used to analyze image aesthetics and emotional valence. Visit the GANalyze website at this http URL.
Understanding CNN representations. The internal representations of a CNN can be unveiled using methods like network dissection @cite_2 @cite_12 @cite_25 , including for a CNN trained on memorability @cite_21 . For instance, @cite_21 showed that units with strong positive correlations with memorable images specialized for people, faces, body parts, etc., while those with strong negative correlations were more sensitive to large regions in landscape scenes. Here, our framework introduces a new way of defining what memorability and aesthetic variability look like.
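As a minimal, hypothetical sketch of this kind of unit-level analysis (not the cited implementation), the function below computes a Pearson-style correlation between each hidden unit's per-image activation and an image score such as memorability:

import numpy as np

def unit_score_correlations(unit_activations, scores, eps=1e-8):
    # unit_activations: (n_images, n_units) mean activation of each unit per
    # image; scores: (n_images,) per-image score, e.g. memorability. Units
    # with strongly positive correlation fire on memorable content; strongly
    # negative units fire on the rest.
    a = unit_activations - unit_activations.mean(axis=0)
    s = scores - scores.mean()
    cov = (a * s[:, None]).mean(axis=0)
    return cov / (a.std(axis=0) * s.std() + eps)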
{ "cite_N": [ "@cite_21", "@cite_25", "@cite_12", "@cite_2" ], "mid": [ "2219771564", "2963996492", "2963749936", "2963081790" ], "abstract": [ "Progress in estimating visual memorability has been limited by the small scale and lack of variety of benchmark data. Here, we introduce a novel experimental procedure to objectively measure human memory, allowing us to build LaMem, the largest annotated image memorability dataset to date (containing 60,000 images from diverse sources). Using Convolutional Neural Networks (CNNs), we show that fine-tuned deep features outperform all other features by a large margin, reaching a rank correlation of 0.64, near human consistency (0.68). Analysis of the responses of the high-level CNN layers shows which objects and regions are positively, and negatively, correlated with memorability, allowing us to create memorability maps for each image and provide a concrete method to perform image memorability manipulation. This work demonstrates that one can now robustly estimate the memorability of images from many different classes, positioning memorability and deep memorability features as prime candidates to estimate the utility of information for cognitive systems. Our model and data are available at: http: memorability.csail.mit.edu.", "Abstract: With the success of new computational architectures for visual processing, such as convolutional neural networks (CNN) and access to image databases with millions of labeled examples (e.g., ImageNet, Places), the state of the art in computer vision is advancing rapidly. One important factor for continued progress is to understand the representations that are learned by the inner layers of these deep architectures. Here we show that object detectors emerge from training CNNs to perform scene classification. As scenes are composed of objects, the CNN for scene classification automatically discovers meaningful objects detectors, representative of the learned scene categories. With object detectors emerging as a result of learning to recognize scenes, our work demonstrates that the same network can perform both scene recognition and object localization in a single forward-pass, without ever having been explicitly taught the notion of objects.", "We propose a general framework called Network Dissection for quantifying the interpretability of latent representations of CNNs by evaluating the alignment between individual hidden units and a set of semantic concepts. Given any CNN model, the proposed method draws on a data set of concepts to score the semantics of hidden units at each intermediate convolutional layer. The units with semantics are labeled across a broad range of visual concepts including objects, parts, scenes, textures, materials, and colors. We use the proposed method to test the hypothesis that interpretability is an axis-independent property of the representation space, then we apply the method to compare the latent representations of various networks when trained to solve different classification problems. We further analyze the effect of training iterations, compare networks trained with different initializations, and measure the effect of dropout and batch normalization on the interpretability of deep visual representations. 
We demonstrate that the proposed method can shed light on characteristics of CNN models and training methods that go beyond measurements of their discriminative power.", "The success of recent deep convolutional neural networks (CNNs) depends on learning hidden representations that can summarize the important factors of variation behind the data. In this work, we describe Network Dissection, a method that interprets networks by providing meaningful labels to their individual units. The proposed method quantifies the interpretability of CNN representations by evaluating the alignment between individual hidden units and visual semantic concepts. By identifying the best alignments, units are given interpretable labels ranging from colors, materials, textures, parts, objects and scenes. The method reveals that deep representations are more transparent and interpretable than they would be under a random equivalently powerful basis. We apply our approach to interpret and compare the latent representations of several network architectures trained to solve a wide range of supervised and self-supervised tasks. We then examine factors affecting the network interpretability such as the number of the training iterations, regularizations, different initialization parameters, as well as networks depth and width. Finally we show that the interpreted units can be used to provide explicit explanations of a given CNN prediction for an image. Our results highlight that interpretability is an important property of deep neural networks that provides new insights into what hierarchical structures can learn." ] }
1906.10112
2950419363
We introduce a framework that uses Generative Adversarial Networks (GANs) to study cognitive properties like memorability, aesthetics, and emotional valence. These attributes are of interest because we do not have a concrete visual definition of what they entail. What does it look like for a dog to be more or less memorable? GANs allow us to generate a manifold of natural-looking images with fine-grained differences in their visual attributes. By navigating this manifold in directions that increase memorability, we can visualize what it looks like for a particular generated image to become more or less memorable. The resulting "visual definitions" surface image properties (like "object size") that may underlie memorability. Through behavioral experiments, we verify that our method indeed discovers image manipulations that causally affect human memory performance. We further demonstrate that the same framework can be used to analyze image aesthetics and emotional valence. Visit the GANalyze website at this http URL.
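A minimal sketch of the latent-manifold navigation the abstract describes, with every component a toy stand-in of ours (a linear "generator" and a linear "assessor"; in the actual framework the walk direction is learned by backpropagating the assessor's score through a pretrained GAN):

import numpy as np

rng = np.random.default_rng(0)
W_g = rng.normal(size=(64, 16))    # toy generator: image = W_g @ z
w_a = rng.normal(size=64)          # toy assessor: memorability score of an image

def generate(z):
    return W_g @ z

def assess(img):
    return float(w_a @ img)

# Walk direction theta. For this linear toy it has a closed form (the gradient
# of the score w.r.t. z); in general theta would be fit by gradient ascent on
# assess(generate(z + alpha * theta)).
theta = W_g.T @ w_a
theta /= np.linalg.norm(theta)

z = rng.normal(size=16)
for alpha in (-1.0, 0.0, 1.0):     # negative alpha: less memorable; positive: more
    print(alpha, round(assess(generate(z + alpha * theta)), 3))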
Modifying Memorability. The memorability of images, such as faces, can be manipulated using warping techniques @cite_23. Concurrent work has also explored using a GAN for this purpose @cite_4. Another approach is deep style transfer @cite_32, which taps into more artistic qualities. Now that GANs have reached a quality that is often almost indistinguishable from real images, they offer a powerful tool to synthesize images with different cognitive qualities. As shown here, our GANalyze framework successfully modified GAN-generated images across a wide range of image categories to produce a second generation of realistic GAN photos with different mnemonic qualities.
{ "cite_N": [ "@cite_4", "@cite_32", "@cite_23" ], "mid": [ "2900446162", "2605875579", "2163078148" ], "abstract": [ "Memorability is considered to be an important characteristic of visual content, whereas for advertisement and educational purposes it is often crucial. Despite numerous studies on understanding and predicting image memorability, there are almost no achievements in memorability modification. In this work, we study two approaches to image editing - GAN and classical image processing - and show their impact on memorability. The visual features which influence memorability directly stay unknown till now, hence it is impossible to control it manually. As a solution, we let GAN learn it deeply using labeled data, and then use it for conditional generation of new images. By analogy with algorithms which edit facial attributes, we consider memorability as yet another attribute and operate with it in the same way. Obtained data is also interesting for analysis, simply because there are no real-world examples of successful change of image memorability while preserving its other attributes. We believe this may give many new answers to the question \"what makes an image memorable?\" Apart from that we also study the influence of conventional photo-editing tools (Photoshop, Instagram, etc.) used daily by a wide audience on memorability. In this case, we start from real practical methods and study it using statistics and recent advances in memorability prediction. Photographers, designers, and advertisers will benefit from the results of this study directly.", "Recent works have shown that it is possible to automatically predict intrinsic image properties like memorability. In this paper, we take a step forward addressing the question: \"Can we make an image more memorable?\". Methods for automatically increasing image memorability would have an impact in many application fields like education, gaming or advertising. Our work is inspired by the popular editing-by-applying-filters paradigm adopted in photo editing applications, like Instagram and Prisma. In this context, the problem of increasing image memorability maps to that of retrieving memorabilizing'' filters or style seeds''. Still, users generally have to go through most of the available filters before finding the desired solution, thus turning the editing process into a resource and time consuming task. In this work, we show that it is possible to automatically retrieve the best style seeds for a given image, thus remarkably reducing the number of human attempts needed to find a good match. Our approach leverages from recent advances in the field of image synthesis and adopts a deep architecture for generating a memorable picture from a given input image and a style seed. Importantly, to automatically select the best style a novel learning-based solution, also relying on deep models, is proposed. Our experimental evaluation, conducted on publicly available benchmarks, demonstrates the effectiveness of the proposed approach for generating memorable images through automatic style seed selection.", "Contemporary life bombards us with many new images of faces every day, which poses non-trivial constraints on human memory. The vast majority of face photographs are intended to be remembered, either because of personal relevance, commercial interests or because the pictures were deliberately designed to be memorable. Can we make a portrait more memorable or more forgettable automatically? 
Here, we provide a method to modify the memorability of individual face photographs, while keeping the identity and other facial traits (e.g. age, attractiveness, and emotional magnitude) of the individual fixed. We show that face photographs manipulated to be more memorable (or more forgettable) are indeed more often remembered (or forgotten) in a crowd-sourcing experiment with an accuracy of 74%. Quantifying and modifying the 'memorability' of a face lends itself to many useful applications in computer vision and graphics, such as mnemonic aids for learning, photo editing applications for social networks and tools for designing memorable advertisements." ] }
1906.10182
2955874556
In this paper, we introduce a novel framework that can learn to make visual predictions about the motion of a robotic agent from raw video frames. Our proposed motion prediction network (PROM-Net) can learn in a completely unsupervised manner and efficiently predict up to 10 frames in the future. Moreover, unlike other motion prediction models, it is lightweight, and once trained it can be easily implemented on mobile platforms that have very limited computing capabilities. We have created a new robotic dataset comprising LEGO Mindstorms robots moving along various trajectories in three different environments under different lighting conditions for testing and training the network. Finally, we introduce a framework that would use the predicted frames from the network as an input to a model predictive controller for motion planning in unknown dynamic environments with moving obstacles.
While considerable progress has been made in DRL @cite_20, @cite_5 toward learning meaningful skills directly from high-dimensional raw sensory data (especially images), most of this work is restricted to simulated environments such as computer games. Only a few works, like @cite_3, @cite_18, address the application of model-based RL algorithms to robotic manipulation tasks using visual foresight. To the best of our knowledge, no existing work addresses end-to-end motion planning for autonomous mobile agents using visual prediction from a first-person (robot) perspective.
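As a sketch of the "visual foresight" recipe referred to above (a prediction model inside a model-predictive control loop), the following uses random-shooting MPC; the learned video predictor is replaced by a known toy dynamics function, and all names and constants are our own:

import numpy as np

rng = np.random.default_rng(0)
goal = np.array([5.0, 5.0])

def predict(state, action):                  # stand-in for a learned one-step predictor
    return state + 0.5 * action

def plan(state, horizon=10, n_samples=256):
    best_cost, best_first = np.inf, None
    for _ in range(n_samples):               # random-shooting MPC over action sequences
        actions = rng.uniform(-1.0, 1.0, size=(horizon, 2))
        s = state
        for a in actions:
            s = predict(s, a)
        cost = np.linalg.norm(s - goal)      # distance of the predicted end state to the goal
        if cost < best_cost:
            best_cost, best_first = cost, actions[0]
    return best_first

state = np.zeros(2)
for _ in range(20):                          # replan every step, execute only the first action
    state = predict(state, plan(state))
print(state)                                 # approaches the goal

In the cited works the same loop is run with goals specified as images and costs computed in pixel or feature space.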
{ "cite_N": [ "@cite_5", "@cite_18", "@cite_3", "@cite_20" ], "mid": [ "2260756217", "2902125520", "2528489519", "1757796397" ], "abstract": [ "We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.", "Deep reinforcement learning (RL) algorithms can learn complex robotic skills from raw sensory inputs, but have yet to achieve the kind of broad generalization and applicability demonstrated by deep learning methods in supervised domains. We present a deep RL method that is practical for real-world robotics tasks, such as robotic manipulation, and generalizes effectively to never-before-seen tasks and objects. In these settings, ground truth reward signals are typically unavailable, and we therefore propose a self-supervised model-based approach, where a predictive model learns to directly predict the future from raw sensory readings, such as camera images. At test time, we explore three distinct goal specification methods: designated pixels, where a user specifies desired object manipulation tasks by selecting particular pixels in an image and corresponding goal positions, goal images, where the desired goal state is specified with an image, and image classifiers, which define spaces of goal states. Our deep predictive models are trained using data collected autonomously and continuously by a robot interacting with hundreds of objects, without human supervision. We demonstrate that visual MPC can generalize to never-before-seen objects---both rigid and deformable---and solve a range of user-defined object manipulation tasks using the same model.", "A key challenge in scaling up robot learning to many skills and environments is removing the need for human supervision, so that robots can collect their own data and improve their own performance without being limited by the cost of requesting human feedback. Model-based reinforcement learning holds the promise of enabling an agent to learn to predict the effects of its actions, which could provide flexible predictive models for a wide range of tasks and environments, without detailed human supervision. We develop a method for combining deep action-conditioned video prediction models with model-predictive control that uses entirely unlabeled training data. Our approach does not require a calibrated camera, an instrumented training set-up, nor precise sensing and actuation. Our results show that our method enables a real robot to perform nonprehensile manipulation — pushing objects — and can handle novel objects not seen during training.", "We present the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning. 
The model is a convolutional neural network, trained with a variant of Q-learning, whose input is raw pixels and whose output is a value function estimating future rewards. We apply our method to seven Atari 2600 games from the Arcade Learning Environment, with no adjustment of the architecture or learning algorithm. We find that it outperforms all previous approaches on six of the games and surpasses a human expert on three of them." ] }
1906.10197
2954931071
Strong inductive biases allow children to learn in fast and adaptable ways. Children use the mutual exclusivity (ME) bias to help disambiguate how words map to referents, assuming that if an object has one label then it does not need another. In this paper, we investigate whether or not standard neural architectures have a ME bias, demonstrating that they lack this learning assumption. Moreover, we show that their inductive biases are poorly matched to early-phase learning in several standard tasks: machine translation and object recognition. There is a compelling case for designing neural networks that reason by mutual exclusivity, which remains an open challenge.
Children utilize a variety of inductive biases like mutual exclusivity when learning the meaning of words @cite_0 . Previous work comparing children and neural networks has focused on the shape bias -- an assumption that objects with the same name tend to have the same shape, as opposed to color or texture @cite_14 . Children acquire a shape bias over the course of language development @cite_1 , and neural networks can do so too, as shown in synthetic learning scenarios @cite_15 @cite_19 and large-scale object recognition tasks @cite_28 (see also @cite_16 and @cite_7 for alternative findings). This bias is related to how quickly children learn the meaning of new words @cite_1 , and recent findings also show that guiding neural networks towards the shape bias improves their performance @cite_20 . In this work, we take initial steps towards a similar investigation of the ME bias in neural networks. Compared to the shape bias, ME has broader implications for machine learning systems; as we show in our analyses, the bias is relevant beyond object recognition.
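A minimal version of such an investigation (our own toy construction, not the paper's exact experiments): train a softmax classifier on labels for known objects while one label is never used in training, then check how much probability a novel object receives for that unused label. An ME learner would concentrate its mass there; a standard classifier typically does not:

import numpy as np

rng = np.random.default_rng(0)
n_known, dim, n_labels = 4, 16, 5      # label 4 is never seen during training
X = rng.normal(size=(n_known, dim))    # one feature vector per known object
W = np.zeros((n_labels, dim))

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

for _ in range(2000):                  # SGD on cross-entropy; object i carries label i
    i = rng.integers(n_known)
    p = softmax(W @ X[i])
    g = p.copy(); g[i] -= 1.0          # gradient of the cross-entropy w.r.t. the logits
    W -= 0.1 * np.outer(g, X[i])

novel = rng.normal(size=dim)           # a never-before-seen object
print(softmax(W @ novel))              # mass on index 4 is nowhere near the ~1.0 ME predicts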
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_28", "@cite_1", "@cite_0", "@cite_19", "@cite_15", "@cite_16", "@cite_20" ], "mid": [ "1982374456", "2963324947", "2963488527", "2125500543", "1732736211", "2963660459", "2035122010", "2903867357", "2902617128" ], "abstract": [ "Abstract We ask if certain dimensions of perceptual similarity are weighted more heavily than others in determining word extension. The specific dimensions examined were shape, size, and texture. In four experiments, subjects were asked either to extend a novel count noun to new instances or, in a nonword classification task, to put together objects that go together. The subjects were 2-year-olds, 3-year-olds, and adults. The results of all four experiments indicate that 2- and 3-year-olds and adults all weight shape more heavily than they do size or texture. This observed emphasis on shape, however, depends on the age of the subject and the task. First, there is a developmental trend. The shape bias increases in strength and generality from 2 to 3 years of age and more markedly from early childhood to adulthood. Second, in young children, the shape bias is much stronger in word extension than in nonword classification tasks. These results suggest that the development of the shape bias originates in language learning—it reflects a fact about language—and does not stem from general perceptual processes.", "", "Deep neural networks (DNNs) have advanced performance on a wide range of complex tasks, rapidly outpacing our understanding of the nature of their solutions. While past work sought to advance our understanding of these models, none has made use of the rich history of problem descriptions, theories, and experimental methods developed by cognitive psychologists to study the human mind. To explore the potential value of these tools, we chose a well-established analysis from developmental psychology that explains how children learn word labels for objects, and applied that analysis to DNNs. Using datasets of stimuli inspired by the original cognitive psychology experiments, we find that state-of-the-art one shot learning models trained on ImageNet exhibit a similar bias to that observed in humans: they prefer to categorize objects according to shape rather than color. The magnitude of this shape bias varies greatly among architecturally identical, but differently seeded models, and even fluctuates within seeds throughout training, despite nearly equivalent classification performance. These results demonstrate the capability of tools from cognitive psychology for exposing hidden computational properties of DNNs, while concurrently providing us with a computational model for human word learning.", "By the age of 3, children easily learn to name new objects, extending new names for unfamiliar objects by similarity in shape. Two experiments tested the proposal that experience in learning object names tunes children's attention to the properties relevant for nam- ing—in the present case, to the property of shape—and thus facilitates the learning of more object names. In Experiment 1, a 9-week longitu- dinal study, 17-month-old children who repeatedly played with and heard names for members of unfamiliar object categories well orga- nized by shape formed the generalization that only objects with simi- lar shapes have the same name. Trained children also showed a dramatic increase in acquisition of new object names outside of the laboratory during the course of the study. 
Experiment 2 replicated these findings and showed that they depended on children's learning both a coherent category structure and object names. Thus, children who learn specific names for specific things in categories with a common organizing property—in this case, shape—also learn to attend to just the right property—in this case, shape—for learning more object names.", "How do children learn that the word "dog" refers not to all four-legged animals, and not just to Ralph, but to all members of a particular species? How do they learn the meanings of verbs like "think," adjectives like "good," and words for abstract entities such as "mortgage" and "story"? The acquisition of word meaning is one of the fundamental issues in the study of mind. According to Paul Bloom, children learn words through sophisticated cognitive abilities that exist for other purposes. These include the ability to infer others' intentions, the ability to acquire concepts, an appreciation of syntactic structure, and certain general learning and memory abilities. Although other researchers have associated word learning with some of these capacities, Bloom is the first to show how a complete explanation requires all of them. The acquisition of even simple nouns requires rich conceptual, social, and linguistic capacities interacting in complex ways. This book requires no background in psychology or linguistics and is written in a clear, engaging style. Topics include the effects of language on spatial reasoning, the origin of essentialist beliefs, and the young child's understanding of representational art. The book should appeal to general readers interested in language and cognition as well as to researchers in the field.", "", "In the novel noun generalization task, 2 1/2-year-old children display generalized expectations about how solid and nonsolid things are named, extending names for never-before-encountered solids by shape and for never-before-encountered nonsolids by material. This distinction between solids and nonsolids has been interpreted in terms of an ontological distinction between objects and substances. Nine simulations and behavioral experiments tested the hypothesis that these expectations arise from the correlations characterizing early learned noun categories. In the simulation studies, connectionist networks were trained on noun vocabularies modeled after those of children. These networks formed generalized expectations about solids and nonsolids that match children's performances in the novel noun generalization task in the very different languages of English and Japanese. The simulations also generate new predictions supported by new experiments with children. Implications are discussed in terms of children's development of distinctions between kinds of categories and in terms of the nature of this knowledge.", "Deep convolutional networks (DCNNs) are achieving previously unseen performance in object classification, raising questions about whether DCNNs operate similarly to human vision. In biological vision, shape is arguably the most important cue for recognition. We tested the role of shape information in DCNNs trained to recognize objects. In Experiment 1, we presented a trained DCNN with object silhouettes that preserved overall shape but were filled with surface texture taken from other objects. Shape cues appeared to play some role in the classification of artifacts, but little or none for animals.
In Experiments 2–4, DCNNs showed no ability to classify glass figurines or outlines but correctly classified some silhouettes. Aspects of these results led us to hypothesize that DCNNs do not distinguish an object's bounding contours from other edges, and that DCNNs access some local shape features, but not global shape. In Experiment 5, we tested this hypothesis with displays that preserved local features but disrupted global shape, and vice versa. With disrupted global shape, which reduced human accuracy to 28%, DCNNs gave the same classification labels as with ordinary shapes. Conversely, local contour changes eliminated accurate DCNN classification but caused no difficulty for human observers. These results provide evidence that DCNNs have access to some local shape information in the form of local edge relations, but they have no access to global object shapes.", "" ] }
1906.10197
2954931071
Strong inductive biases allow children to learn in fast and adaptable ways. Children use the mutual exclusivity (ME) bias to help disambiguate how words map to referents, assuming that if an object has one label then it does not need another. In this paper, we investigate whether or not standard neural architectures have a ME bias, demonstrating that they lack this learning assumption. Moreover, we show that their inductive biases are poorly matched to early-phase learning in several standard tasks: machine translation and object recognition. There is a compelling case for designing neural networks that reason by mutual exclusivity, which remains an open challenge.
Closer to the present research, a recent study @cite_25 analyzed an ME-like effect in neural machine translation systems at the sentence level, rather than the word level considered in developmental studies and our analyses here. Cohn-Gordon and Goodman @cite_25 showed that neural machine translation systems often learn many-to-one sentence mappings that result in meaning loss, such that two different sentences (meanings) in the source language are mapped to the same sentence (meaning) in the target language. Using a trained network, they show how a probabilistic pragmatics model @cite_2 can be used as a post-processor to preserve meaning and encourage one-to-one mappings. These sentence-level biases do not necessarily indicate how models behave at the word level, and we are interested in the role of ME during learning rather than as a post-processing step. Nevertheless, Cohn-Gordon and Goodman's results are important and encouraging, raising the possibility that ME could aid in training deep learning systems.
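For reference, the core of the probabilistic pragmatics model @cite_2 used as that post-processor is very compact. A toy sketch with our own two-utterance lexicon (uniform prior, rationality parameter set to 1):

import numpy as np

lexicon = np.array([    # rows: utterances, cols: referents (1 = literally true)
    [1.0, 1.0],         # "glasses": true of both referents
    [0.0, 1.0],         # "hat": true only of referent 1
])

def normalize(m, axis):
    return m / m.sum(axis=axis, keepdims=True)

L0 = normalize(lexicon, axis=1)   # literal listener: P(referent | utterance)
S1 = normalize(L0, axis=0)        # pragmatic speaker: P(utterance | referent)
L1 = normalize(S1, axis=1)        # pragmatic listener: P(referent | utterance)
print(L0[0], L1[0])               # "glasses": [0.5 0.5] -> [0.75 0.25]

Applied at the sentence level, the same speaker/listener recursion is what discourages two source sentences from collapsing onto a single translation.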
{ "cite_N": [ "@cite_25", "@cite_2" ], "mid": [ "2949702208", "1993979041" ], "abstract": [ "A desideratum of high-quality translation systems is that they preserve meaning, in the sense that two sentences with different meanings should not translate to one and the same sentence in another language. However, state-of-the-art systems often fail in this regard, particularly in cases where the source and target languages partition the \"meaning space\" in different ways. For instance, \"I cut my finger.\" and \"I cut my finger off.\" describe different states of the world but are translated to French (by both Fairseq and Google Translate) as \"Je me suis coupe le doigt.\", which is ambiguous as to whether the finger is detached. More generally, translation systems are typically many-to-one (non-injective) functions from source to target language, which in many cases results in important distinctions in meaning being lost in translation. Building on Bayesian models of informative utterance production, we present a method to define a less ambiguous translation system in terms of an underlying pre-trained neural sequence-to-sequence model. This method increases injectivity, resulting in greater preservation of meaning as measured by improvement in cycle-consistency, without impeding translation quality (measured by BLEU score).", "One of the most astonishing features of human language is its capacity to convey information efficiently in context. Many theories provide informal accounts of communicative inference, yet there have been few successes in making precise, quantitative predictions about pragmatic reasoning. We examined judgments about simple referential communication games, modeling behavior in these games by assuming that speakers attempt to be informative and that listeners use Bayesian inference to recover speakers’ intended referents. Our model provides a close, parameter-free fit to human judgments, suggesting that the use of information-theoretic tools to predict pragmatic reasoning may lead to more effective formal models of communication." ] }
1906.10187
2956032547
When deploying autonomous agents in the real world, we need effective ways of communicating objectives to them. Traditional skill learning has revolved around reinforcement and imitation learning, each with rigid constraints on the format of information exchanged between the human and the agent. While scalar rewards carry little information, demonstrations require significant effort to provide and may carry more information than is necessary. Furthermore, rewards and demonstrations are often defined and collected before training begins, when the human is most uncertain about what information would help the agent. In contrast, when humans communicate objectives with each other, they make use of a large vocabulary of informative behaviors, including non-verbal communication, and often communicate throughout learning, responding to observed behavior. In this way, humans communicate intent with minimal effort. In this paper, we propose such interactive learning as an alternative to reward or demonstration-driven learning. To accomplish this, we introduce a multi-agent training framework that enables an agent to learn from another agent who knows the current task. Through a series of experiments, we demonstrate the emergence of a variety of interactive learning behaviors, including information-sharing, information-seeking, and question-answering. Most importantly, we find that our approach produces an agent that is capable of learning interactively from a human user, without a set of explicit demonstrations or a reward function, and achieving significantly better performance cooperatively with a human than a human performing the task alone.
Our work builds upon the idea of meta-learning, or learning to learn. Meta-learning for control has been considered in the context of reinforcement learning and imitation learning. Our problem setting differs from these, as the agent learns by observing and interacting with another agent, as opposed to using reinforcement or imitation learning. In particular, our method builds upon recurrence-based meta-learning approaches in the context of the multi-agent task setting. When a broader range of interactive behaviors is desired, prior works have introduced a multi-agent learning component. These methods are more closely related to ours in that, during training, they also maximize a joint reward function between the agents, from which cooperative behavior emerges. Multiple works @cite_10 @cite_7 produce emergent cooperative behavior, but in task domains that do not require knowledge transfer between the agents, while others @cite_1 @cite_4 @cite_12 @cite_8 @cite_2 all produce emergent communication over a communication channel. Such communication is known to be difficult to interpret without post-inspection or a method for translation. Critically, none of these prior works conduct user experiments to evaluate transfer to humans.
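A tabular sketch of the referential games in which several of the cited works give rise to emergent communication (the cited systems use deep networks; this toy of ours uses two softmax tables trained with REINFORCE on the shared reward, and may converge to an imperfect protocol):

import numpy as np

rng = np.random.default_rng(0)
n_objects, n_symbols, lr = 3, 3, 0.5
S = np.zeros((n_objects, n_symbols))   # sender logits: target object -> message
R = np.zeros((n_symbols, n_objects))   # receiver logits: message -> guessed object

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

for _ in range(5000):
    target = rng.integers(n_objects)
    ps = softmax(S[target]); msg = rng.choice(n_symbols, p=ps)
    pr = softmax(R[msg]);    guess = rng.choice(n_objects, p=pr)
    reward = 1.0 if guess == target else 0.0   # shared reward for both agents
    gs = -ps; gs[msg] += 1.0                   # REINFORCE: reward * grad log pi
    gr = -pr; gr[guess] += 1.0
    S[target] += lr * reward * gs
    R[msg] += lr * reward * gr

print(S.argmax(axis=1), R.argmax(axis=1))      # typically an (arbitrary) object<->symbol code

The learned code is arbitrary, which is precisely why the interpretability and translation issues mentioned above arise.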
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_8", "@cite_1", "@cite_2", "@cite_10", "@cite_12" ], "mid": [ "", "2962938168", "2950472486", "2963000099", "2906586541", "2768629321", "2108892923" ], "abstract": [ "", "", "The current mainstream approach to train natural language systems is to expose them to large amounts of text. This passive learning is problematic if we are interested in developing interactive machines, such as conversational agents. We propose a framework for language learning that relies on multi-agent communication. We study this learning in the context of referential games. In these games, a sender and a receiver see a pair of images. The sender is told one of them is the target and is allowed to send a message from a fixed, arbitrary vocabulary to the receiver. The receiver must rely on this message to identify the target. Thus, the agents develop their own language interactively out of the need to communicate. We show that two networks with simple configurations are able to learn to coordinate in the referential game. We further explore how to make changes to the game environment to cause the \"word meanings\" induced in the game to better reflect intuitive semantic properties of the images. In addition, we present a simple strategy for grounding the agents' code into natural language. Both of these are necessary steps towards developing machines that are able to communicate with humans productively.", "We consider the problem of multiple agents sensing and acting in environments with the goal of maximising their shared utility. In these environments, agents must learn communication protocols in order to share information that is needed to solve the tasks. By embracing deep neural networks, we are able to demonstrate end-to-end learning of protocols in complex environments inspired by communication riddles and multi-agent computer vision problems with partial observability. We propose two approaches for learning in these domains: Reinforced Inter-Agent Learning (RIAL) and Differentiable Inter-Agent Learning (DIAL). The former uses deep Q-learning, while the latter exploits the fact that, during learning, agents can backpropagate error derivatives through (noisy) communication channels. Hence, this approach uses centralised learning but decentralised execution. Our experiments introduce new environments for studying the learning of communication protocols and present a set of engineering innovations that are essential for success in these domains.", "Several approaches have recently been proposed for learning decentralized deep multiagent policies that coordinate via a differentiable communication channel. While these policies are effective for many tasks, interpretation of their induced communication strategies has remained a challenge. Here we propose to interpret agents' messages by translating them. Unlike in typical machine translation problems, we have no parallel data to learn from. Instead we develop a translation model based on the insight that agent messages and natural language strings mean the same thing if they induce the same belief about the world in a listener. 
We present theoretical guarantees and empirical evidence that our approach preserves both the semantics and pragmatics of messages by ensuring that players communicating through a translation layer do not suffer a substantial loss in reward relative to players with a common language.", "This work considers the problem of learning cooperative policies in complex, partially observable domains without explicit communication. We extend three classes of single-agent deep reinforcement learning algorithms based on policy gradient, temporal-difference error, and actor-critic methods to cooperative multi-agent systems. To effectively scale these algorithms beyond a trivial number of agents, we combine them with a multi-agent variant of curriculum learning. The algorithms are benchmarked on a suite of cooperative control tasks, including tasks with discrete and continuous actions, as well as tasks with dozens of cooperating agents. We report the performance of the algorithms using different neural architectures, training procedures, and reward structures. We show that policy gradient methods tend to outperform both temporal-difference and actor-critic methods and that curriculum learning is vital to scaling reinforcement learning algorithms in complex multi-agent domains.", "Multi-agent systems (MAS) are a field of study of growing interest in a variety of domains such as robotics or distributed controls. The article focuses on decentralized reinforcement learning (RL) in cooperative MAS, where a team of independent learning robots (IL) try to coordinate their individual behavior to reach a coherent joint behavior. We assume that each robot has no information about its teammates' actions. To date, RL approaches for such ILs did not guarantee convergence to the optimal joint policy in scenarios where the coordination is difficult. We report an investigation of existing algorithms for the learning of coordination in cooperative MAS, and suggest a Q-learning extension for ILs, called hysteretic Q-learning. This algorithm does not require any additional communication between robots. Its advantages are showing off and compared to other methods on various applications: bi-matrix games, collaborative ball balancing task and pursuit domain." ] }
1906.10047
2950357695
In 2008, Ben-Amram, Jones and Kristiansen showed that for a simple programming language---representing non-deterministic imperative programs with bounded loops, and arithmetics limited to addition and multiplication - it is possible to decide precisely whether a program has certain growth-rate properties, in particular whether a computed value, or the program's running time, has a polynomial growth rate. A natural and intriguing problem was to move from answering the decision problem to giving a quantitative result, namely, a tight polynomial upper bound. This paper shows how to obtain asymptotically-tight, multivariate, disjunctive polynomial bounds for this class of programs. This is a complete solution: whenever a polynomial bound exists it will be found. A pleasant surprise is that the algorithm is quite simple; but it relies on some subtle reasoning. An important ingredient in the proof is the forest factorization theorem, a strong structural result on homomorphisms into a finite monoid.
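To make the setting concrete, here is a toy instance (our example, rendered in Python for executability) of the kind of program and bound involved; "loop n" in the core language means a loop whose body executes at most n times:

# Core-language-style program: loop X1 { X2 := X2 + X3; X3 := X3 + 1 }
def run(n):
    x2, x3 = 0, 0
    for _ in range(n):       # "loop n": at most n iterations
        x2 += x3
        x3 += 1
    return x2

for n in (10, 100, 1000):
    print(n, run(n), n * (n - 1) // 2)   # run(n) == n*(n-1)/2, a tight Theta(n^2) bound

A complete analysis must report the quadratic bound for X2 here, and must never report a polynomial bound where none exists.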
Bound analysis, in the sense of finding symbolic bounds for data values, iteration bounds and related quantities, is a classic field of program analysis @cite_27 @cite_10 @cite_12. It is also an area of active research, with tools currently (or recently) under development including COSTA @cite_11, AProVE @cite_0, CiaoPP @cite_18, C4B @cite_8, and Loopus @cite_1---and this is just a sample of tools for imperative programs. There is also work on functional and logic programs, term rewriting systems, recurrence relations, etc., which we cannot attempt to survey here. In the rest of this section we point out work that is more directly related to ours, and has even inspired it.
{ "cite_N": [ "@cite_18", "@cite_8", "@cite_1", "@cite_0", "@cite_27", "@cite_10", "@cite_12", "@cite_11" ], "mid": [ "2791777217", "2058033966", "2577171054", "2533836568", "2003295303", "", "2060213695", "2094712524" ], "abstract": [ "Many applications require conformance with specifications that constrain the use of resources, such as execution time, energy, bandwidth, etc. We have presented a configurable framework for static resource usage verification where specifications can include lower and upper bound, data size-dependent resource usage functions. To statically check such specifications, our framework infers the same type of resource usage functions, which safely approximate the actual resource usage of the program, and compares them against the specification. We review how this framework supports several languages and compilation output formats by translating them to an intermediate representation based on Horn clauses and using the configurability of the framework to describe the resource semantics of the input language. We provide a more detailed formalization and extend the framework so that both resource usage specification and analysis verification output can include preconditions expressing intervals for the input data sizes for which assertions are applicable, proved, or disproved. Most importantly, we also extend the classes of functions that can be checked. We provide results from an implementation within the Ciao CiaoPP framework, and report on a tool built by instantiating this framework for the verification of energy consumption specifications for imperative embedded programs. This paper is under consideration for publication in Theory and Practice of Logic Programming (TPLP).", "This paper presents a new approach for automatically deriving worst-case resource bounds for C programs. The described technique combines ideas from amortized analysis and abstract interpretation in a unified framework to address four challenges for state-of-the-art techniques: compositionality, user interaction, generation of proof certificates, and scalability. Compositionality is achieved by incorporating the potential method of amortized analysis. It enables the derivation of global whole-program bounds with local derivation rules by naturally tracking size changes of variables in sequenced loops and function calls. The resource consumption of functions is described abstractly and a function call can be analyzed without access to the function body. User interaction is supported with a new mechanism that clearly separates qualitative and quantitative verification. A user can guide the analysis to derive complex non-linear bounds by using auxiliary variables and assertions. The assertions are separately proved using established qualitative techniques such as abstract interpretation or Hoare logic. Proof certificates are automatically generated from the local derivation rules. A soundness proof of the derivation system with respect to a formal cost semantics guarantees the validity of the certificates. Scalability is attained by an efficient reduction of bound inference to a linear optimization problem that can be solved by off-the-shelf LP solvers. The analysis framework is implemented in the publicly-available tool C4B. 
An experimental evaluation demonstrates the advantages of the new technique with a comparison of C4B with existing tools on challenging micro benchmarks and the analysis of more than 2900 lines of C code from the cBench benchmark suite.", "Difference constraints have been used for termination analysis in the literature, where they denote relational inequalities of the form @math xź≤y+c, and describe that the value of x in the current state is at most the value of y in the previous state plus some constant @math cźZ. We believe that difference constraints are also a good choice for complexity and resource bound analysis because the complexity of imperative programs typically arises from counter increments and resets, which can be modeled naturally by difference constraints. In this article we propose a bound analysis based on difference constraints. We make the following contributions: (1) our analysis handles bound analysis problems of high practical relevance which current approaches cannot handle: we extend the range of bound analysis to a class of challenging but natural loop iteration patterns which typically appear in parsing and string-matching routines. (2) We advocate the idea of using bound analysis to infer invariants: our soundness proven algorithm obtains invariants through bound analysis, the inferred invariants are in turn used for obtaining bounds. Our bound analysis therefore does not rely on external techniques for invariant generation. (3) We demonstrate that difference constraints are a suitable abstract program model for automatic complexity and resource bound analysis: we provide efficient abstraction techniques for obtaining difference constraint programs from imperative code. (4) We report on a thorough experimental comparison of state-of-the-art bound analysis tools: we set up a tool comparison on (a) a large benchmark of real-world C code, (b) a benchmark built of examples taken from the bound analysis literature and (c) a benchmark of challenging iteration patterns which we found in real source code. (5) Our analysis is more scalable than existing approaches: we discuss how we achieve scalability.", "In this system description, we present the tool AProVE for automatic termination and complexity proofs of Java, C, Haskell, Prolog, and rewrite systems. In addition to classical term rewrite systems (TRSs), AProVE also supports rewrite systems containing built-in integers (int-TRSs). To analyze programs in high-level languages, AProVE automatically converts them to (int-)TRSs. Then, a wide range of techniques is employed to prove termination and to infer complexity bounds for the resulting rewrite systems. The generated proofs can be exported to check their correctness using automatic certifiers. To use AProVE in software construction, we present a corresponding plug-in for the popular Eclipse software development environment.", "One means of analyzing program performance is by deriving closed-form expressions for their execution behavior. This paper discusses the mechanization of such analysis, and describes a system, Metric, which is able to analyze simple Lisp programs and produce, for example, closed-form expressions for their running time expressed in terms of size of input. 
This paper presents the reasons for mechanizing program analysis, describes the operation of Metric, explains its implementation, and discusses its limitations.", "", "There has been a great deal of research done on the evaluation of the complexity of particular algorithms; little effort, however, has been devoted to the mechanization of this evaluation. The ACE (Automatic Complexity Evaluator) system is able to analyze reasonably large programs, like sorting programs, in a fully mechanical way. A time-complexity function is derived from the initial functional program. This function is transformed into its nonrecursive equivalent according to MacCarthy's recursion induction principle, using a predefined library of recursive definitions. As the complexity is not a decidable property, this transformation will not be possible in all cases. The richer the predefined library is, the more likely the system is to succeed. The operations performed by ACE are described and the use of the system is illustrated with the analysis of a sorting algorithm. Related works and further improvements are discussed in the conclusion.", "Cost analysis statically approximates the cost of programs in terms of their input data size. This paper presents, to the best of our knowledge, the first approach to the automatic cost analysis of object-oriented bytecode programs. In languages such as Java and C#, analyzing bytecode has a much wider application area than analyzing source code since the latter is often not available. Cost analysis in this context has to consider, among others, dynamic dispatch, jumps, the operand stack, and the heap. Our method takes a bytecode program and a cost model specifying the resource of interest, and generates cost relations which approximate the execution cost of the program with respect to such resource. We report on COSTA, an implementation for Java bytecode which can obtain upper bounds on cost for a large class of programs and complexity classes. Our basic techniques can be directly applied to infer cost relations for other object-oriented imperative languages, not necessarily in bytecode form." ] }
1906.10047
2950357695
In 2008, Ben-Amram, Jones and Kristiansen showed that for a simple programming language---representing non-deterministic imperative programs with bounded loops, and arithmetics limited to addition and multiplication - it is possible to decide precisely whether a program has certain growth-rate properties, in particular whether a computed value, or the program's running time, has a polynomial growth rate. A natural and intriguing problem was to move from answering the decision problem to giving a quantitative result, namely, a tight polynomial upper bound. This paper shows how to obtain asymptotically-tight, multivariate, disjunctive polynomial bounds for this class of programs. This is a complete solution: whenever a polynomial bound exists it will be found. A pleasant surprise is that the algorithm is quite simple; but it relies on some subtle reasoning. An important ingredient in the proof is the forest factorization theorem, a strong structural result on homomorphisms into a finite monoid.
In the vast literature on bound analysis in various forms, there are a few other works that give a complete solution for a weak language. Size-change programs are considered by @cite_9 @cite_7. This abstraction discards nearly everything in the program, leaving a control-flow graph annotated with assertions about variables that decrease (or do not increase) in a transition. Thus, it does not assume structured and explicit loops, and it cannot express information about values that increase. Both works yield tight bounds on the number of transitions until termination.
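A compact sketch of the size-change formalism (our code; it implements only the classic SCT termination criterion, namely that every idempotent graph in the composition closure must carry a strictly decreasing self-arc, whereas the cited works additionally compute tight bounds):

# A size-change graph is a set of arcs (src, dst, strict): the new value of dst
# is <= the old value of src; strict=True means strictly smaller.
def compose(g, h):
    out = {}
    for (x, y, s1) in g:
        for (y2, z, s2) in h:
            if y == y2:                       # a path x -> y -> z yields an arc x -> z
                out[(x, z)] = out.get((x, z), False) or s1 or s2
    return frozenset((x, z, s) for (x, z), s in out.items())

def sct_terminates(graphs):
    closure = {frozenset(g) for g in graphs}  # close the graph set under composition
    while True:
        new = {compose(g, h) for g in closure for h in closure} - closure
        if not new:
            break
        closure |= new
    return all(any(x == z and s for (x, z, s) in g)
               for g in closure if compose(g, g) == g)

# x' < y and y' <= x: no single variable decreases at every step, yet SCT proves
# termination; by contrast, x' <= x alone admits an infinite run.
swap = {("y", "x", True), ("x", "y", False)}
stay = {("x", "x", False)}
print(sct_terminates([swap]), sct_terminates([stay]))   # True False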
{ "cite_N": [ "@cite_9", "@cite_7" ], "mid": [ "80442178", "2218585456" ], "abstract": [ "Max-plus automata (over ℕ ∪ − ∞) are finite devices that map input words to non-negative integers or − ∞. In this paper we present (a) an algorithm allowing to compute the asymptotic behaviour of max-plus automata, and (b) an application of this technique to the evaluation of the computational time complexity of programs.", "The size-change abstraction (SCA) is a popular program abstraction for termination analysis, and has been successfully implemented for imperative, functional and logic programs. Recently, it has been shown that SCA is also an attractive domain for the automatic analysis of the computational complexity of programs. In this paper, we provide asymptotically precise ranking functions for the special case of deterministic size-change systems. As a consequence we also obtain the result that the asymptotic complexity of deterministic size-change systems is exactly polynomial and that the exact integer exponent can be computed in PSPACE." ] }
1906.10047
2950357695
In 2008, Ben-Amram, Jones and Kristiansen showed that for a simple programming language---representing non-deterministic imperative programs with bounded loops, and arithmetics limited to addition and multiplication - it is possible to decide precisely whether a program has certain growth-rate properties, in particular whether a computed value, or the program's running time, has a polynomial growth rate. A natural and intriguing problem was to move from answering the decision problem to giving a quantitative result, namely, a tight polynomial upper bound. This paper shows how to obtain asymptotically-tight, multivariate, disjunctive polynomial bounds for this class of programs. This is a complete solution: whenever a polynomial bound exists it will be found. A pleasant surprise is that the algorithm is quite simple; but it relies on some subtle reasoning. An important ingredient in the proof is the forest factorization theorem, a strong structural result on homomorphisms into a finite monoid.
Dealing with a somewhat different problem, @cite_6 @cite_15 both check, or find, invariants in the form of polynomial equations. We find it remarkable that they give complete solutions for weak languages, where the weakness lies in the non-deterministic control-flow, as in our language. If one could give a complete solution for polynomial inequalities, this would imply a solution to our problem as well.
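A tiny worked instance (our own toy, not taken from the cited papers) of an invariant in the form of a polynomial equation: for a loop body with the simultaneous update

\[
  (x', y') = (x + 1,\; y + 2x + 1),
\]

the equation \(y = x^2\) is invariant, since \(y' = x^2 + 2x + 1 = (x + 1)^2 = x'^2\). A complete method in the sense of @cite_6 @cite_15 is guaranteed to find every valid equation of a given form, including this one.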
{ "cite_N": [ "@cite_15", "@cite_6" ], "mid": [ "2963164018", "2003141394" ], "abstract": [ "We exhibit an algorithm to compute the strongest polynomial (or algebraic) invariants that hold at each location of a given affine program (i.e., a program having only non-deterministic (as opposed to conditional) branching and all of whose assignments are given by affine expressions). Our main tool is an algebraic result of independent interest: given a finite set of rational square matrices of the same dimension, we show how to compute the Zariski closure of the semigroup that they generate.", "We present two automatic program analyses. The first analysis checks if a given polynomial relation holds among the program variables whenever control reaches a given program point. It fully interprets assignment statements with polynomial expressions on the right-hand side and polynomial disequality guards. Other assignments are treated as non-deterministically assigning any value and guards that are not polynomial disequalities are ignored. The second analysis extends this checking procedure. It computes the set of all polynomial relations of an arbitrary given form that are valid at a given target program point. It is also complete up to the abstraction described above." ] }
1906.10047
2950357695
In 2008, Ben-Amram, Jones and Kristiansen showed that for a simple programming language---representing non-deterministic imperative programs with bounded loops, and arithmetics limited to addition and multiplication - it is possible to decide precisely whether a program has certain growth-rate properties, in particular whether a computed value, or the program's running time, has a polynomial growth rate. A natural and intriguing problem was to move from answering the decision problem to giving a quantitative result, namely, a tight polynomial upper bound. This paper shows how to obtain asymptotically-tight, multivariate, disjunctive polynomial bounds for this class of programs. This is a complete solution: whenever a polynomial bound exists it will be found. A pleasant surprise is that the algorithm is quite simple; but it relies on some subtle reasoning. An important ingredient in the proof is the forest factorization theorem, a strong structural result on homomorphisms into a finite monoid.
The growth problem for semigroups of matrices is related to our work as well (though we did not discover this until recently, as the work was done in an entirely different context). Specifically, @cite_29 gives an algorithm that can be expressed in the language of our work as follows: given a Simple Disjunctive Loop in which all the polynomials are linear, the algorithm decides whether the loop is polynomially bounded and, if it is, returns the highest degree of @math in the tight polynomial upper bound (over all variables). A closer inspection shows that it can actually determine the degree with which @math enters the bound for every variable. Thus, it solves a certain aspect of the SDL analysis problem. Their algorithm runs in polynomial time and uses an approach similar to @cite_22.
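An empirical illustration of that growth problem (brute force over all products of a toy matrix set of ours; the cited algorithm instead decides the growth degree in polynomial time):

import numpy as np

A = np.array([[1, 1, 0],
              [0, 1, 1],
              [0, 0, 1]])            # the max entry of A^t grows like t^2 / 2
I = np.eye(3, dtype=int)
sigma = [A, I]                       # toy matrix set

prods = {tuple(I.flatten())}         # all products of the current length, deduplicated
for t in range(1, 33):
    prods = {tuple((np.array(p).reshape(3, 3) @ M).flatten())
             for p in prods for M in sigma}
    if t in (4, 8, 16, 32):
        print(t, max(max(p) for p in prods))   # doubling t roughly quadruples the norm

The measured degree here is 2, which is what the algorithm of @cite_29 computes exactly and symbolically.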
{ "cite_N": [ "@cite_29", "@cite_22" ], "mid": [ "1622830285", "1548801608" ], "abstract": [ "For a given finite set Sigma of matrices with nonnegative integer entries we study the growth with t of max parallel to A(1)... A(t)parallel to : A(i) epsilon Sigma . We show how to determine in polynomial time whether this growth is bounded, polynomial, or exponential, and we characterize all possible behaviors. (c) 2007 Elsevier Inc. All rights reserved.", "We present a new method for inferring complexity properties for imperative programs with bounded loops. The properties handled are: polynomial (or linear) boundedness of computed values, as a function of the input; and similarly for the running time. It is well known that complexity properties are undecidable for a Turing-complete programming language. Much work in program analysis overcomes this obstacle by relaxing the correctness notion: one does not ask for an algorithm that correctly decides whether the property of interest holds or not, but only for \"yes\" answers to be sound. In contrast, we reshaped the problem by defining a \"core\" programming language that is Turing-incomplete, but strong enough to model real programs of interest. For this language, our method is the first to give a certain answer; in other words, our inference is both sound and complete. The essence of the method is that every command is assigned a \"complexity certificate\", which is a concise specification of dependencies of output values on input. These certificates are produced by inference rules that are compositional and efficiently computable. The approach is inspired by previous work by Niggl and Wunderlich and by Jones and Kristiansen, but use a novel, more expressive kind of certificates." ] }
1907.01055
2953962706
A major obstacle to the development of Natural Language Processing (NLP) methods in the biomedical domain is data accessibility. This problem can be addressed by generating medical data artificially. Most previous studies have focused on the generation of short clinical text, and evaluation of the data utility has been limited. We propose a generic methodology to guide the generation of clinical text with key phrases. We use the artificial data as additional training data in two key biomedical NLP tasks: text classification and temporal relation extraction. We show that artificially generated training data used in conjunction with real training data can lead to performance boosts for data-greedy neural network algorithms. We also demonstrate the usefulness of the generated data for NLP setups where it fully replaces real training data.
Existing vanilla models mainly focus on local lexical decisions, which limits their ability to model the global integrity of the text. This issue can be tackled by varying the generation conditions: e.g., guiding the generation with prompts @cite_17 or named entities @cite_5, or using template-based generation @cite_8. All these conditions serve as binding elements that relate generated sentences and ensure the cohesion of the resulting text.
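A toy sketch of the template-based flavor of such conditioning (templates and key phrases are invented for illustration; neural variants replace the template bank with a learned decoder conditioned on the same key phrases):

import random

templates = [                      # hypothetical clinical templates (ours)
    "Patient reports {symptom} starting {time}.",
    "{symptom} noted on examination; onset {time}.",
    "History significant for {symptom} since {time}.",
]

def generate(key_phrases, rng=random.Random(0)):
    # The key phrases act as binding elements: every realization must contain them.
    return rng.choice(templates).format(**key_phrases)

print(generate({"symptom": "chest pain", "time": "two days ago"}))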
{ "cite_N": [ "@cite_5", "@cite_8", "@cite_17" ], "mid": [ "", "2951103768", "2798664956" ], "abstract": [ "", "While neural, encoder-decoder models have had significant empirical success in text generation, there remain several unaddressed problems with this style of generation. Encoder-decoder models are largely (a) uninterpretable, and (b) difficult to control in terms of their phrasing or content. This work proposes a neural generation system using a hidden semi-markov model (HSMM) decoder, which learns latent, discrete templates jointly with learning to generate. We show that this model learns useful templates, and that these templates make generation both more interpretable and controllable. Furthermore, we show that this approach scales to real data sets and achieves strong performance nearing that of encoder-decoder text generation models.", "We explore story generation: creative systems that can build coherent and fluent passages of text about a topic. We collect a large dataset of 300K human-written stories paired with writing prompts from an online forum. Our dataset enables hierarchical story generation, where the model first generates a premise, and then transforms it into a passage of text. We gain further improvements with a novel form of model fusion that improves the relevance of the story to the prompt, and adding a new gated multi-scale self-attention mechanism to model long-range context. Experiments show large improvements over strong baselines on both automated and human evaluations. Human judges prefer stories generated by our approach to those from a strong non-hierarchical model by a factor of two to one." ] }
1907.00893
2954281063
We present a computational design system that assists users to model, optimize, and fabricate quad-robots with soft skins. Our system addresses the challenging task of predicting their physical behavior by fully integrating the multibody dynamics of the mechanical skeleton and the elastic behavior of the soft skin. The developed motion control strategy uses an alternating optimization scheme to avoid expensive full space-time optimization, interleaving space-time optimization for the skeleton and frame-by-frame optimization for the full dynamics. The output is a set of motor torques that drive the robot to achieve a user-prescribed motion trajectory. We also provide a collection of convenient engineering tools and empirical manufacturing guidance to support the fabrication of the designed quad-robot. We validate the feasibility of designs generated with our system through physics simulations and with a physically-fabricated prototype.
Computational fabrication aims at designing and creating physical artifacts with the help of computational methods. A large class of methods addresses inverse design problems by incorporating fabrication limitations into geometric design algorithms via constrained optimization or the integration of fast simulation techniques @cite_63 @cite_12 (the generic optimization pattern is sketched after this record). This line of research enables the design of objects with a wide range of controllable physical and mechanical properties, such as appearance @cite_64 @cite_36 @cite_14 @cite_83 , deformation @cite_48 @cite_42 @cite_17 @cite_3 , articulation @cite_85 @cite_45 @cite_87 , and mechanical motion @cite_73 @cite_76 @cite_33 @cite_51 @cite_72 . Some existing contributions have also investigated how to instantiate virtual characters as 3D-printable physical entities such as mechanical robots @cite_73 @cite_18 . Yet, these methods focus on robots consisting of rigid links and consider only basic balancing constraints and/or velocity limits.
{ "cite_N": [ "@cite_64", "@cite_87", "@cite_36", "@cite_42", "@cite_85", "@cite_3", "@cite_72", "@cite_18", "@cite_48", "@cite_17", "@cite_83", "@cite_76", "@cite_73", "@cite_12", "@cite_14", "@cite_33", "@cite_45", "@cite_63", "@cite_51" ], "mid": [ "", "1994976116", "1985025469", "1983512267", "2011032066", "", "2811027829", "2028208724", "2082254676", "1981948516", "2810334442", "1976494588", "2025338486", "2810733959", "2810000000", "2739068909", "2014937114", "2903151799", "2736736250" ], "abstract": [ "", "Articulated deformable characters are widespread in computer animation. Unfortunately, we lack methods for their automatic fabrication using modern additive manufacturing (AM) technologies. We propose a method that takes a skinned mesh as input, then estimates a fabricatable single-material model that approximates the 3D kinematics of the corresponding virtual articulated character in a piecewise linear manner. We first extract a set of potential joint locations. From this set, together with optional, user-specified range constraints, we then estimate mechanical friction joints that satisfy inter-joint non-penetration and other fabrication constraints. To avoid brittle joint designs, we place joint centers on an approximate medial axis representation of the input geometry, and maximize each joint's minimal cross-sectional area. We provide several demonstrations, manufactured as single, assembled pieces using 3D printers.", "Multi-material 3D printing allows objects to be composed of complex, heterogenous arrangements of materials. It is often more natural to define a functional goal than to define the material composition of an object. Translating these functional requirements to fabri-cable 3D prints is still an open research problem. Recently, several specific instances of this problem have been explored (e.g., appearance or elastic deformation), but they exist as isolated, monolithic algorithms. In this paper, we propose an abstraction mechanism that simplifies the design, development, implementation, and reuse of these algorithms. Our solution relies on two new data structures: a reducer tree that efficiently parameterizes the space of material assignments and a tuner network that describes the optimization process used to compute material arrangement. We provide an application programming interface for specifying the desired object and for defining parameters for the reducer tree and tuner network. We illustrate the utility of our framework by implementing several fabrication algorithms as well as demonstrating the manufactured results.", "We present a method for fabrication-oriented design of actuated deformable characters that allows a user to automatically create physical replicas of digitally designed characters using rapid manufacturing technologies. Given a deformable character and a set of target poses as input, our method computes a small set of actuators along with their locations on the surface and optimizes the internal material distribution such that the resulting character exhibits the desired deformation behavior. We approach this problem with a dedicated algorithm that combines finite-element analysis, sparse regularization, and constrained optimization. 
We validate our pipeline on a set of two- and three-dimensional example characters and present results in simulation and physically-fabricated prototypes.", "Although there is an abundance of 3D models available, most of them exist only in virtual simulation and are not immediately usable as physical objects in the real world. We solve the problem of taking as input a 3D model of a man-made object, and automatically generating the parts and connectors needed to build the corresponding physical object. We focus on furniture models, and we define formal grammars for IKEA cabinets and tables. We perform lexical analysis to identify the primitive parts of the 3D model. Structural analysis then gives structural information to these parts, and generates the connectors (i.e. nails, screws) needed to attach the parts together. We demonstrate our approach with arbitrary 3D models of cabinets and tables available online.", "", "We propose a computation-driven approach to design optimization and motion synthesis for robotic creatures that locomote using arbitrary arrangements of legs and wheels.", "We present an interactive design system that allows casual users to quickly create 3D-printable robotic creatures. Our approach automates the tedious parts of the design process while providing ample room for customization of morphology, proportions, gait and motion style. The technical core of our framework is an efficient optimization-based solution that generates stable motions for legged robots of arbitrary designs. An intuitive set of editing tools allows the user to interactively explore the space of feasible designs and to study the relationship between morphological features and the resulting motions. Fabrication blueprints are generated automatically such that the robot designs can be manufactured using 3D-printing and off-the-shelf servo motors. We demonstrate the effectiveness of our solution by designing six robotic creatures with a variety of morphological features: two, four or five legs, point or area feet, actuated spines and different proportions. We validate the feasibility of the designs generated with our system through physics simulations and physically-fabricated prototypes.", "This paper introduces a data-driven process for designing and fabricating materials with desired deformation behavior. Our process starts with measuring deformation properties of base materials. For each base material we acquire a set of example deformations, and we represent the material as a non-linear stress-strain relationship in a finite-element model. We have validated our material measurement process by comparing simulations of arbitrary stacks of base materials with measured deformations of fabricated material stacks. After material measurement, our process continues with designing stacked layers of base materials. We introduce an optimization process that finds the best combination of stacked layers that meets a user's criteria specified by example deformations. Our algorithm employs a number of strategies to prune poor solutions from the combinatorial search space. We demonstrate the complete process by designing and fabricating objects with complex heterogeneous materials using modern multi-material 3D printers.", "We introduce elastic textures: a set of parametric, tileable, printable, cubic patterns achieving a broad range of isotropic elastic material properties: the softest pattern is over a thousand times softer than the stiffest, and the Poisson's ratios range from below zero to nearly 0.5. 
Using a combinatorial search over topologies followed by shape optimization, we explore a wide space of truss-like, symmetric 3D patterns to obtain a small family. This pattern family can be printed without internal support structure on a single-material 3D printer and can be used to fabricate objects with prescribed mechanical behavior. The family can be extended easily to create anisotropic patterns with target orthotropic properties. We demonstrate that our elastic textures are able to achieve a user-supplied varying material property distribution. We also present a material optimization algorithm to choose material properties at each point within an object to best fit a target deformation under a prescribed scenario. We show that, by fabricating these spatially varying materials with elastic textures, the desired behavior is achieved.", "We propose a method using a standard ultraviolet printer for fabricating reflectors that are capable of displaying two or more colored images.", "Mechanical figures that mimic human motions continue to entertain us and capture our imagination. Creating such automata requires expertise in motion planning, knowledge of mechanism design, and familiarity with fabrication constraints. Thus, automaton design remains restricted to only a handful of experts. We propose an automatic algorithm that takes a motion sequence of a humanoid character and generates the design for a mechanical figure that approximates the input motion when driven with a single input crank. Our approach has two stages. The motion approximation stage computes a motion that approximates the input sequence as closely as possible while remaining compatible with the geometric and motion constraints of the mechanical parts in our design. Then, in the layout stage, we solve for the sizing parameters and spatial layout of all the elements, while respecting all fabrication and assembly constraints. We apply our algorithm on a range of input motions taken from motion capture databases. We also fabricate two of our designs to demonstrate the viability of our approach.", "We present an interactive design system that allows non-expert users to create animated mechanical characters. Given an articulated character as input, the user iteratively creates an animation by sketching motion curves indicating how different parts of the character should move. For each motion curve, our framework creates an optimized mechanism that reproduces it as closely as possible. The resulting mechanisms are attached to the character and then connected to each other using gear trains, which are created in a semi-automated fashion. The mechanical assemblies generated with our system can be driven with a single input driver, such as a hand-operated crank or an electric motor, and they can be fabricated using rapid prototyping devices. We demonstrate the versatility of our approach by designing a wide range of mechanical characters, several of which we manufactured using 3D printing. 
While our pipeline is designed for characters driven by planar mechanisms, significant parts of it extend directly to non-planar mechanisms, allowing us to create characters with compelling 3D motions.", "We present a new method to fabricate 3D models on a robotic printing system so the need for supporting structures can be tremendously reduced by accumulating materials along curved tool-paths.", "We present an efficient and scalable pipeline for 3D printing full-colored objects with spatially varying translucency from practical and accessible input data.", "We present an interactive design system to create functional mechanical objects. Our computational approach allows novice users to retarget an existing mechanical template to a user-specified input shape. Our proposed representation for a mechanical template encodes a parameterized mechanism, mechanical constraints that ensure a physically valid configuration, spatial relationships of mechanical parts to the user-provided shape, and functional constraints that specify an intended functionality. We provide an intuitive interface and optimization-in-the-loop approach for finding a valid configuration of the mechanism and the shape to ensure that higher-level functional goals are met. Our algorithm interactively optimizes the mechanism while the user manipulates the placement of mechanical components and the shape. Our system allows users to efficiently explore various design choices and to synthesize customized mechanical objects that can be fabricated with rapid prototyping technologies. We demonstrate the efficacy of our approach by retargeting various mechanical templates to different shapes and fabricating the resulting functional mechanical objects.", "Additive manufacturing (3D printing) is commonly used to produce physical models for a wide variety of applications, from archaeology to design. While static models are directly supported, it is desirable to also be able to print models with functional articulations, such as a hand with joints and knuckles, without the need for manual assembly of joint components. Apart from having to address limitations inherent to the printing process, this poses a particular challenge for articulated models that should be posable: to allow the model to hold a pose, joints need to exhibit internal friction to withstand gravity, without their parts fusing during 3D printing. This has not been possible with previous printable joint designs. In this paper, we propose a method for converting 3D models into printable, functional, non-assembly models with internal friction. To this end, we have designed an intuitive work-flow that takes an appropriately rigged 3D model, automatically fits novel 3D-printable and posable joints, and provides an interface for specifying rotational constraints. We show a number of results for different articulated models, demonstrating the effectiveness of our method.", "We introduce a computational solution for cost-efficient 3D fabrication using universal building blocks. Our key idea is to employ a set of universal blocks, which can be massively prefabricated at a low cost, to quickly assemble and constitute a significant internal core of the target object, so that only the residual volume need to be 3D printed online. We further improve the fabrication efficiency by decomposing the residual volume into a small number of printing-friendly pyramidal pieces. 
Computationally, we face a coupled decomposition problem: decomposing the input object into an internal core and residual, and decomposing the residual, to fulfill a combination of objectives for efficient 3D fabrication. To this end, we formulate an optimization that jointly minimizes the residual volume, the number of pyramidal residual pieces, and the amount of support waste when printing the residual pieces. To solve the optimization in a tractable manner, we start with a maximal internal core and iteratively refine it with local cuts to minimize the cost function. Moreover, to efficiently explore the large search space, we resort to cost estimates aided by pre-computation and avoid the need to explicitly construct pyramidal decompositions for each solution candidate. Results show that our method can iteratively reduce the estimated printing time and cost, as well as the support waste, and helps to save hours of fabrication time and much material consumption.", "We present a computational tool for designing compliant mechanisms. Our method takes as input a conventional, rigidly-articulated mechanism defining the topology of the compliant design. This input can be both planar or spatial, and we support a number of common joint types which, whenever possible, are automatically replaced with parameterized flexures. As the technical core of our approach, we describe a number of objectives that shape the design space in a meaningful way, including trajectory matching, collision avoidance, lateral stability, resilience to failure, and minimizing motor torque. Optimal designs in this space are obtained as solutions to an equilibrium-constrained minimization problem that we solve using a variant of sensitivity analysis. We demonstrate our method on a set of examples that range from simple four-bar linkages to full-fledged animatronics, and verify the feasibility of our designs by manufacturing physical prototypes." ] }
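The inverse-design pattern surveyed above can be made concrete with a small sketch: design parameters are optimized so that a placeholder forward simulation matches a target behavior, subject to a fabrication constraint. The toy forward model, the target values, and the 2 mm printability bound are all assumptions for illustration only.

import numpy as np
from scipy.optimize import minimize

target = np.array([0.02, 0.05, 0.08])   # desired deflections (m), made up

def simulate(design):
    # placeholder forward model: deflection falls as the stiffness-like
    # design parameter grows
    return 0.1 / design

def objective(design):
    # match the simulated behavior to the design target
    return np.sum((simulate(design) - target) ** 2)

# fabrication limitation folded in as a constraint:
# assumed minimum printable feature size of 2 mm
cons = [{"type": "ineq", "fun": lambda d: d - 0.002}]

res = minimize(objective, x0=np.ones(3), constraints=cons)
print("optimized design parameters:", res.x)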
1907.00893
2954281063
We present a computational design system that assists users to model, optimize, and fabricate quad-robots with soft skins. Our system addresses the challenging task of predicting their physical behavior by fully integrating the multibody dynamics of the mechanical skeleton and the elastic behavior of the soft skin. The developed motion control strategy uses an alternating optimization scheme to avoid expensive full space-time optimization, interleaving space-time optimization for the skeleton and frame-by-frame optimization for the full dynamics. The output is a set of motor torques that drive the robot to achieve a user-prescribed motion trajectory. We also provide a collection of convenient engineering tools and empirical manufacturing guidance to support the fabrication of the designed quad-robot. We validate the feasibility of designs generated with our system through physics simulations and with a physically-fabricated prototype.
@cite_23 proposed a process for designing synthetic skin and actuation parameters for animatronic characters that mimic facial expressions of a given subject. @cite_42 optimized the internal material distribution so that the resulting character exhibits the desired deformation behavior. Focusing on actuation, @cite_54 computed the layout of winch-tendon networks to animate plush toys, and @cite_78 optimized the chamber structure and material distribution for designing soft pneumatic objects.
{ "cite_N": [ "@cite_54", "@cite_42", "@cite_78", "@cite_23" ], "mid": [ "2738703359", "1983512267", "2770227623", "1990940174" ], "abstract": [ "We present a computational approach to creating animated plushies, soft robotic plush toys specifically-designed to reenact user-authored motions. Our design process is inspired by muscular hydrostat structures, which drive highly versatile motions in many biological systems. We begin by instrumenting simulated plush toys with a large number of small, independently-actuated, virtual muscle-fibers. Through an intuitive posing interface, users then begin animating their plushie. A novel numerical solver, reminiscent of inverse-kinematics, computes optimal contractions for each muscle-fiber such that the soft body of the plushie deforms to best match user input. By analyzing the co-activation patterns of the fibers that contribute most to the plushie's motions, our design system generates physically-realizable winch-tendon networks. Winch-tendon networks model the motorized cable-driven actuation mechanisms that drive the motions of our real-life plush toy prototypes. We demonstrate the effectiveness of our computational approach by co-designing motions and actuation systems for a variety of physically-simulated and fabricated plushies.", "We present a method for fabrication-oriented design of actuated deformable characters that allows a user to automatically create physical replicas of digitally designed characters using rapid manufacturing technologies. Given a deformable character and a set of target poses as input, our method computes a small set of actuators along with their locations on the surface and optimizes the internal material distribution such that the resulting character exhibits the desired deformation behavior. We approach this problem with a dedicated algorithm that combines finite-element analysis, sparse regularization, and constrained optimization. We validate our pipeline on a set of two- and three-dimensional example characters and present results in simulation and physically-fabricated prototypes.", "We present an end-to-end solution for design and fabrication of soft pneumatic objects with desired deformations. Given a 3D object with its rest and deformed target shapes, our method automatically optimizes the chamber structure and material distribution inside the object volume so that the fabricated object can deform to all the target deformed poses with controlled air injection. To this end, our method models the object volume with a set of chambers separated by material shells. Each chamber has individual channels connected to the object surface and thus can be separately controlled with a pneumatic system, while the shell is comprised of base material with an embedded frame structure. A two-step algorithm is developed to compute the geometric layout of the chambers and frame structure as well as the material properties of the frame structure from the input. The design results can be fabricated with 3D printing and deformed by a controlled pneumatic system. We validate and demonstrate the efficacy of our method with soft pneumatic objects that have different shapes and deformation behaviors.", "We propose a complete process for designing, simulating, and fabricating synthetic skin for an animatronics character that mimics the face of a given subject and its expressions. The process starts with measuring the elastic properties of a material used to manufacture synthetic soft tissue. 
Given these measurements we use physics-based simulation to predict the behavior of a face when it is driven by the underlying robotic actuation. Next, we capture 3D facial expressions for a given target subject. As the key component of our process, we present a novel optimization scheme that determines the shape of the synthetic skin as well as the actuation parameters that provide the best match to the target expressions. We demonstrate this computational skin design by physically cloning a real human face onto an animatronics figure." ] }
1907.00893
2954281063
We present a computational design system that assists users to model, optimize, and fabricate quad-robots with soft skins. Our system addresses the challenging task of predicting their physical behavior by fully integrating the multibody dynamics of the mechanical skeleton and the elastic behavior of the soft skin. The developed motion control strategy uses an alternating optimization scheme to avoid expensive full space-time optimization, interleaving space-time optimization for the skeleton and frame-by-frame optimization for the full dynamics. The output is a set of motor torques that drive the robot to achieve a user-prescribed motion trajectory. We also provide a collection of convenient engineering tools and empirical manufacturing guidance to support the fabrication of the designed quad-robot. We validate the feasibility of designs generated with our system through physics simulations and with a physically-fabricated prototype.
The seminal work by Witkin and Kass @cite_50 , the well-known space-time constraints framework for animation, generates motion trajectories by optimizing an objective subject to physical constraints and animator controls at key frames (a compact statement of the problem follows this record). Given proper motion data, space-time optimization produces realistic articulated motions for bipedal or multi-legged characters with different physical properties @cite_46 @cite_41 @cite_22 @cite_49 @cite_77 @cite_86 @cite_59 @cite_79 @cite_15 . It can also be used to transform motion capture data into physically plausible motions @cite_44 .
{ "cite_N": [ "@cite_22", "@cite_15", "@cite_41", "@cite_44", "@cite_77", "@cite_79", "@cite_49", "@cite_50", "@cite_59", "@cite_86", "@cite_46" ], "mid": [ "1556798239", "2075005272", "1987706689", "2125298102", "2051567001", "2014664434", "2083334152", "", "2099824239", "2139093517", "2163574775" ], "abstract": [ "This dissertation presents a hierarchical controller which can learn to perform complex motor skills. Humans routinely coordinate many degrees of freedom smoothly and effortlessly to achieve complex goals. Moreover, we are good at learning new patterns of coordination to produce new skills. Robots and artificial systems, on the other hand, typically have difficulty with the kinds of behaviors that come most naturally to us. Skills such as running, skiing, playing basketball, or diving involve complex nonlinear dynamics, many degrees of freedom, and behavioral goals that can be difficult to specify mathematically; goals such as “ski down the mountain without falling down” or “shoot a layup” must be translated from linguistic requirements into dynamic system constraints. The focus in this dissertation will be on the skill of platform diving, in which the diver's goal is to execute a certain dive and enter the water in a fully-extended, vertical position. Controlling a simulated diver is a difficult problem for standard control and planning algorithms; conservation of angular momentum gives the system dynamics a nonholonomic constraint with nonlinear drift. In this dissertation, ideas from the fields of biological motor control and learning are combined with new learning algorithms in the design of a hierarchical controller which learns to dive. At the lower level of the control hierarchy, each degree of freedom in the diver's joints is assigned a controller based on biological pattern generators for fast, single-joint movements. These controllers contain neural networks, which are trained on data generated by simulation. The higher level of the control hierarchy incorporates ideas from human skill learning: to achieve a desired behavior pattern, a human learning a new skill uses information from instructors and from watching other performers to build a mental model of the task requirements, and then practices to refine the parameters of this behavioral model. In the high-level controller, each dive is represented as a sequence of multi-joint synergies. The controller learns initial estimates of the timing of these synergies from observational data and then refines these estimates through Q-learning with repeated simulations.", "We present a technique for analyzing a set of animal gaits to predict the gait of a new animal from its shape alone. This method works on a wide range of bipeds and quadrupeds, and adapts the motion style to the size and shape of the animal. We achieve this by combining inverse optimization with sparse data interpolation. Starting with a set of reference walking gaits extracted from sagittal plane video footage, we first use inverse optimization to learn physically motivated parameters describing the style of each of these gaits. Given a new animal, we estimate the parameters describing its gait with sparse data interpolation, then solve a forward optimization problem to synthesize the final gait. To improve the realism of the results, we introduce a novel algorithm called joint inverse optimization which learns coherent patterns in motion style from a database of example animal-gait pairs. 
We quantify the predictive performance of our model by comparing its synthesized gaits to ground truth walking motions for a range of different animals. We also apply our method to the prediction of gaits for dinosaurs and other extinct creatures.", "This paper describes new techniques to design physically based, goal directed motion of synthetic creatures. More specifically, it concentrates on developing an interactive framework for specifying constraints and objectives for the motion, and for guiding the numericrd solution of the optimization problem thus defined. The ability to define, modify and guide constrained spacetime problems is provided through an interactive user interface. Innovations that are introduced include, (1) the subdivision of spacetime into discrete pieces, or Spacetime Windows, over which subproblems can be formulated and solved, (2) the use of cubic B-spline approximation techniques to define a C2 function for the creature’s time dependent degrees of freedom, (3) the use of both symbolic and numerical processes to construct and solve the constrained optimization problem, and (4) the ability to specify inequality and conditional constraints. Creatures, in the context of this work, consist of rigid links connected by joints defining a set of generalized degrees of freedom. Hybrid symbolic and numeric techniques to solve the resulting complex constrained optimization problems are made possible by the special structure of physically based models of such creatures, and by the recent development of symbolic algebraic languages. A graphical user interface process handles communication between the user and two other processes; one devoted to symbolic differentiation and manipulation of the constraints and objectives, and one that performs the iterative numerical solution of the optimization problem. The user interface itself provides both high and low level definition of, interaction with, and inspection of, the optimization process and the resulting animation. Implementation issues and experiments with the Spacetime Windows system are discussed,", "We introduce a novel algorithm for transforming character animation sequences that preserves essential physical properties of the motion. By using the spacetime constraints dynamics formulation our algorithm maintains realism of the original motion sequence without sacrificing full user control of the editing process. In contrast to most physically based animation techniques that synthesize motion from scratch, we take the approach of motion transformationas the underlying paradigm for generating computer animations. In doing so, we combine the expressive richness of an input animation sequence with the controllability of spacetime optimization to create a wide range of realistic character animations. The spacetime dynamics formulation also allows editing of intuitive, high-level motion concepts such as the time and placement of footprints, length and mass of various extremities, number of body joints and gravity. Our algorithm is well suited for the reuse of highly-detailed captured motion animations. In addition, we describe a new methodology for mapping a motion between characters with drastically different numbers of degrees of freedom. We use this method to reduce the complexity of the spacetime optimization problems. 
Furthermore, our approach provides a paradigm for controlling complex dynamic and kinematic systems with simpler ones.", "Optimization is an appealing way to compute the motion of an animated character because it allows the user to specify the desired motion in a sparse, intuitive way. The difficulty of solving this problem for complex characters such as humans is due in part to the high dimensionality of the search space. The dimensionality is an artifact of the problem representation because most dynamic human behaviors are intrinsically low dimensional with, for example, legs and arms operating in a coordinated way. We describe a method that exploits this observation to create an optimization problem that is easier to solve. Our method utilizes an existing motion capture database to find a low-dimensional space that captures the properties of the desired behavior. We show that when the optimization problem is solved within this low-dimensional subspace, a sparse sketch can be used as an initial guess and full physics constraints can be enabled. We demonstrate the power of our approach with examples of forward, vertical, and turning jumps; with running and walking; and with several acrobatic flips.", "This article shows how statistical motion priors can be combined seamlessly with physical constraints for human motion modeling and generation. The key idea of the approach is to learn a nonlinear probabilistic force field function from prerecorded motion data with Gaussian processes and combine it with physical constraints in a probabilistic framework. In addition, we show how to effectively utilize the new model to generate a wide range of natural-looking motions that achieve the goals specified by users. Unlike previous statistical motion models, our model can generate physically realistic animations that react to external forces or changes in physical quantities of human bodies and interaction environments. We have evaluated the performance of our system by comparing against ground-truth motion data and alternative methods.", "Optimization is a promising way to generate new animations from a minimal amount of input data. Physically based optimization techniques, however, are difficult to scale to complex animated characters, in part because evaluating and differentiating physical quantities becomes prohibitively slow. Traditional approaches often require optimizing or constraining parameters involving joint torques; obtaining first derivatives for these parameters is generally an O(D2) process, where D is the number of degrees of freedom of the character. In this paper, we describe a set of objective functions and constraints that lead to linear time analytical first derivatives. The surprising finding is that this set includes constraints on physical validity, such as ground contact constraints. Considering only constraints and objective functions that lead to linear time first derivatives results in fast per-iteration computation times and an optimization problem that appears to scale well to more complex characters. We show that qualities such as squash-and-stretch that are expected from physically based optimization result from our approach. Our animation system is particularly useful for synthesizing highly dynamic motions, and we show examples of swinging and leaping motions for characters having from 7 to 22 degrees of freedom.", "", "We present a fully automatic method for generating gaits and morphologies for legged animal locomotion. 
Given a specific animal's shape we can determine an efficient gait with which it can move. Similarly, we can also adapt the animal's morphology to be optimal for a specific locomotion task. We show that determining such gaits is possible without the need to specify a good initial motion, and without manually restricting the allowed gaits of each animal. Our approach is based on a hybrid optimization method which combines an efficient derivative-aware spacetime constraints optimization with a derivative-free approach able to find non-local solutions in high-dimensional discontinuous spaces. We demonstrate the effectiveness of this approach by synthesizing dynamic locomotions of bipeds, a quadruped, and an imaginary five-legged creature.", "Adaptation of ballistic motion demands a technique that can make required adjustments in anticipation of flight periods when only some physically consistent changes are possible. This article describes a numerical procedure that adjusts a physically consistent motion to fulfill new adaptation requirements expressed in kinematic and dynamic constraints. This iterative procedure refines the original motion with a sequence of minimal adjustments, implicitly favoring motions that are similar to the original performance, and transforming any input motion, including those that are difficult to characterize with an objective function. In total, over twenty adaptations were generated from two recorded performances, a run and a jump, by varying foot placement, restricting muscle use, adding new environment constraints, and changing the length and mass of specific limbs.", "In this paper we describe a technique to generate realistic human movement, specifically platform dives, by solving an optimal control problem requiring little a priori information. Solving the optimal control problem reliably requires computing exact analytic gradients of the objective function, which is made possible by a hybrid recursive algorithm that calculates the dynamics of the system. This algorithm is formulated with Lie algebra techniques and matrix exponentials, resulting in equations that are easily differentiable. This quality is essential when solving ill-conditioned systems such as the diver. Also, the importance of initial conditions in light of the constant angular momentum constraint is discussed." ] }
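For reference, the space-time constraints problem discussed above can be stated compactly as follows. The notation is generic (q: joint trajectories, tau: actuation torques, M and c: mass matrix and velocity-dependent forces, f: contact forces), not the exact formulation of any single cited paper.

\begin{aligned}
\min_{q(t),\,\tau(t)} \quad & \int_{t_0}^{t_1} \lVert \tau(t) \rVert^2 \, dt \\
\text{s.t.} \quad & M(q)\,\ddot{q} + c(q,\dot{q}) = \tau + J(q)^{\top} f && \text{(equations of motion)} \\
& q(t_k) = q_k && \text{(animator key frames)}
\end{aligned}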
1907.00893
2954281063
We present a computational design system that assists users to model, optimize, and fabricate quad-robots with soft skins. Our system addresses the challenging task of predicting their physical behavior by fully integrating the multibody dynamics of the mechanical skeleton and the elastic behavior of the soft skin. The developed motion control strategy uses an alternating optimization scheme to avoid expensive full space-time optimization, interleaving space-time optimization for the skeleton and frame-by-frame optimization for the full dynamics. The output is a set of motor torques that drive the robot to achieve a user-prescribed motion trajectory. We also provide a collection of convenient engineering tools and empirical manufacturing guidance to support the fabrication of the designed quad-robot. We validate the feasibility of designs generated with our system through physics simulations and with a physically-fabricated prototype.
The locomotion controller computes joint torques or control forces that drive the locomotion behaviors of articulated figures. Joint torques are usually calculated by a proportional-derivative (PD) controller such that the rigid skeleton of a character follows designated joint-angle trajectories @cite_7 @cite_60 @cite_67 (see the sketch after this record). Balance control strategies, such as swing-foot placement or the zero-moment point, are essential for generating stable locomotion @cite_7 @cite_60 @cite_24 @cite_38 @cite_8 @cite_6 @cite_61 @cite_56 . Continuous adaptation of the target joint trajectory for balancing a walking human was developed in @cite_5 . Controllers that produce highly dynamic skills for human animation were suggested in @cite_4 @cite_57 @cite_66 . Joint torques can also be computed via optimal control to approximate motion capture data or motion data from kinematic simulators @cite_52 @cite_37 .
{ "cite_N": [ "@cite_61", "@cite_38", "@cite_67", "@cite_37", "@cite_4", "@cite_7", "@cite_60", "@cite_8", "@cite_52", "@cite_6", "@cite_56", "@cite_24", "@cite_57", "@cite_5", "@cite_66" ], "mid": [ "2122996380", "2007272966", "2043709302", "2120003827", "1990932462", "2043878167", "2120894402", "2065942939", "2089528820", "2140640599", "2078940713", "2070038508", "1990922788", "2001165022", "2076806230" ], "abstract": [ "This paper proposes a new control scheme of decentralized multi-legged robot based on Follow-the-Contact-Point (FCP) gait control. In this control scheme, the first legs contact the points allocated on the terrain and the following legs touch the foot on the point which the fore leg is contacting. By creating adequate contacting points on the environment, the robot can be navigated successfully. Since the position information of the contacting points is relayed based on physical information of the legs, each leg does not need the global position of the contacting point. As a result, the proposed control scheme realizes decentralized architecture. This paper introduces the control law for the robot walking on even terrain. Finally, the result of physical simulation of 20-legged robot shows the availability of the proposed method.", "This paper describes an algorithm for automatically adapting existing simulated behaviors to new characters. Animating a new character is difficult because a control system tuned for one character will not, in general, work on a character with different limb lengths, masses, or moments of inertia. The algorithm presented here adapts the control system to a new character in two stages. First, the control system parameters are scaled based on the sizes, masses, and moments of inertia of the new and the original characters. Then a subset of the parameters is fine-tuned using a search process based on simulated annealing. To demonstrate the effectiveness of this approach, we animate the running motion of a woman, child, and imaginary character by modifying the control system for a man. We also animate the bicycling motion of a second imaginary character by modifying the control system for a man. We evaluate the results of this approach by comparing the motion of the simulated human runners with video of an actual child and with data for men, women, and children in the literature. In addition to adapting a control system for a new model, this approach can also be used to adapt the control system in an on-line fashion to produce a physically realistic metamorphosis from the original to the new model while the morphing character is performing the behavior. We demonstrate this on-line adaptation with a morph from a man to a woman over a period of twenty seconds. CR Categories: I.3.7 [Computer Graphics]: Three Dimensional Graphics and Realism: Animation—; G.1.6 [Numerical Analysis]: Optimization—; I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling—Physically-Based Modeling", "We develop an integrated set of gaits and skills for a physics-based simulation of a quadruped. The motion repertoire for our simulated dog includes walk, trot, pace, canter, transverse gallop, rotary gallop, leaps capable of jumping on-and-off platforms and over obstacles, sitting, lying down, standing up, and getting up from a fall. The controllers use a representation based on gait graphs, a dual leg frame model, a flexible spine model, and the extensive use of internal virtual forces applied via the Jacobian transpose. 
Optimizations are applied to these control abstractions in order to achieve robust gaits and leaps with desired motion styles. The resulting gaits are evaluated for robustness with respect to push disturbances and the traversal of variable terrain. The simulated motions are also compared to motion data captured from a filmed dog.", "Dynamically simulated characters are difficult to control because they are underactuated---they have no direct control over their global position and orientation. In order to succeed, control policies must look ahead to determine stabilizing actions, but such planning is complicated by frequent ground contacts that produce a discontinuous search space. This paper introduces a locomotion system that generates high-quality animation of agile movements using nonlinear controllers that plan through such contact changes. We demonstrate the general applicability of this approach by emulating walking and running motions in rigid-body simulations. Then we consolidate these controllers under a higher-level planner that interactively controls the character's direction.", "Human motions are the product of internal and external forces, but these forces are very difficult to measure in a general setting. Given a motion capture trajectory, we propose a method to reconstruct its open-loop control and the implicit contact forces. The method employs a strategy based on randomized sampling of the control within user-specified bounds, coupled with forward dynamics simulation. Sampling-based techniques are well suited to this task because of their lack of dependence on derivatives, which are difficult to estimate in contact-rich scenarios. They are also easy to parallelize, which we exploit in our implementation on a compute cluster. We demonstrate reconstruction of a diverse set of captured motions, including walking, running, and contact rich tasks such as rolls and kip-up jumps. We further show how the method can be applied to physically based motion transformation and retargeting, physically plausible motion variations, and reference-trajectory-free idling motions. Alongside the successes, we point out a number of limitations and directions for future work.", "This paper is about the use of control algorithms to animate dynamic legged locomotion. Control could free the animator from specifying the details of joint and limb motion while producing both physically realistic and natural looking results. We implemented computer animations of a biped robot, a quadruped robot, and a kangaroo. Each creature was modeled as a linked set of rigid bodies with compliant actuators at its joints. Control algorithms regulated the running speed, organized use of the legs, and maintained balance. All motions were generated by numerically integrating equations of motion derived from the physical models. The resulting behavior included running at various speeds, traveling with several gaits (run, trot, bound, gallop, and hop), jumping, and traversing simple paths. Whereas the use of control permitted a variety of physically realistic animated behavior to be generated with limited human intervention, the process of designing the control algorithms was not automated: the algorithms were \"tweaked\" and adjusted for each new creature.", "This paper describes algorithms for the animation of male and female models performing three dynamic athletic behaviors: running, bicycling, and vaulting. 
We animate these behaviors using control algorithms that cause a physically realistic model to perform the desired maneuver. For example, control algorithms allow the simulated humans to maintain balance while moving their arms, to run or bicycle at a variety of speeds, and to perform two vaults. For each simulation, we compare the computed motion to that of humans performing similar maneuvers. We perform the comparison both qualitatively through real and simulated video images and quantitatively through simulated and biomechanical data.", "Passive dynamic walking refers to a class of bipedal machines that are able to walk down a gentle slope with no external control or energy input. The legs swing naturally as pendula, and conservation of angular momentum governs the contact of the swing foot with the ground. Previous machines have been limited to planar motions. We extend the planar motions to allow for tilting side to side (roll motion). Passive walking cycles exist, but the roll motion is unstable, resembling that of an inverted pendulum. The instability is due to mismatching of roll velocity with the ground contact conditions. Several strategies are presented for stabilizing this motion, of which the quasi-static control of step width is determined to be both simple and efficient.", "Animating natural human motion in dynamic environments is difficult because of complex geometric and physical interactions. Simulation provides an automatic solution to parts of this problem, but it needs control systems to produce lifelike motions. This paper describes the systematic computation of controllers that can reproduce a range of locomotion styles in interactive simulations. Given a reference motion that describes the desired style, a derived control system can reproduce that style in simulation and in new environments. Because it produces high-quality motions that are both geometrically and physically consistent with simulated surroundings, interactive animation systems could begin to use this approach along with more established kinematic methods.", "We propose a method of maintaining balance for a human-like character against large perturbations. The method enables a human-like model to maintain its balance with active whole-body motion, such as rotating its arms, bending down, and taking a step, if necessary. First, we capture the human motions of maintaining balance and abstract essential mechanisms from these motions. Next, we construct a model of maintaining balance that has a simple structure, such as an inverted pendulum. This model has two modes of maintaining balance: keeping the feet on the ground, and stepping. In this paper, the stepping mode is mainly described. Finally, we generate whole-body motion based on the model against several perturbations, and we discuss the validity of our method", "This paper focuses on the mammal bionic quadruped robots. The main challenge in this field is how to design the highly dynamical and high payload quadruped robots. This paper firstly introduces the history of bionic quadruped robots, particularly the landmark quadruped robots. Then the state-of-the art of drive mode for quadruped robots is reviewed. Subsequently, the development trend of quadruped robots is described. Based on the state-of-the art of quadruped robots, the technical difficulties of bionic quadruped robots are briefly reviewed. And the hydraulic quadruped robot developed in Shandong University is introduced. 
Finally, the summary and future work of the quadruped robots is given.", "The authors have developed five kinds of biped locomotive robots so far. They are named BIPER-1, 2, 3, 4, and 5. All of them are statically unstable but can perform a dynamically stable walk with suitable control. BIPER-1 and BIPER-2 walk only sideways. BIPER-3 is a stilt-type robot whose foot contacts occur at a point and who can walk sideways, back ward, and forward. BIPER-4's legs have the same degrees of freedom as human legs. BIPER-5 is similar to BIPER-3, but in the case of BIPER-5 all apparatus, such as the computer, are mounted on it.This paper deals with the control theory used for BIPER-3 and BIPER-4. In both cases, basically the same control method is applied. The most important point is that the mo tion of either robot during the single-leg support phase can be approximated by the motion of an inverted pendulum. Ac cordingly, in this paper, dynamic walk is considered to be a series of inverted-pendulum motions with appropriate condi tions of connection.", "We introduce a new method to generate agile and natural human landing motions in real-time via physical simulation without using any mocap or pre-scripted sequences. We develop a general controller that allows the character to fall from a wide range of heights and initial speeds, continuously roll on the ground, and get back on its feet, without inducing large stress on joints at any moment. The character's motion is generated through a forward simulator and a control algorithm that consists of an airborne phase and a landing phase. During the airborne phase, the character optimizes its moment of inertia to meet the ideal relation between the landing velocity and the angle of attack, under the laws of conservation of momentum. The landing phase can be divided into three stages: impact, rolling, and getting-up. To reduce joint stress at landing, the character leverages contact forces to control linear momentum and angular momentum, resulting in a rolling motion which distributes impact over multiple body parts. We demonstrate that our control algorithm can be applied to a variety of initial conditions with different falling heights, orientations, and linear and angular velocities. Simulated results show that our algorithm can effectively create realistic action sequences comparable to real world footage of experienced freerunners.", "Physics-based simulation and control of biped locomotion is difficult because bipeds are unstable, underactuated, high-dimensional dynamical systems. We develop a simple control strategy that can be used to generate a large variety of gaits and styles in real-time, including walking in all directions (forwards, backwards, sideways, turning), running, skipping, and hopping. Controllers can be authored using a small number of parameters, or their construction can be informed by motion capture data. The controllers are applied to 2D and 3D physically-simulated character models. Their robustness is demonstrated with respect to pushes in all directions, unexpected steps and slopes, and unexpected variations in kinematic and dynamic parameters. Direct transitions between controllers are demonstrated as well as parameterized control of changes in direction and speed. 
Feedback-error learning is applied to learn predictive torque models, which allows for the low-gain control that typifies many natural motions as well as producing smoother simulated motion.", "In this paper we learn the skills required by real-time physics-based avatars to perform parkour-style fast terrain crossing using a mix of running, jumping, speed-vaulting, and drop-rolling. We begin with a single motion capture example of each skill and then learn reduced-order linear feedback control laws that provide robust execution of the motions during forward dynamic simulation. We then parameterize each skill with respect to the environment, such as the height of obstacles, or with respect to the task parameters, such as running speed and direction. We employ a continuation process to achieve the required parameterization of the motions and their affine feedback laws. The continuation method uses a predictor-corrector method based on radial basis functions. Lastly, we build control laws specific to the sequential composition of different skills, so that the simulated character can robustly transition to obstacle clearing maneuvers from running whenever obstacles are encountered. The learned transition skills work in tandem with a simple online step-based planning algorithm, and together they robustly guide the character to achieve a state that is well-suited for the chosen obstacle-clearing motion." ] }
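The PD scheme mentioned above reduces to a one-line torque law per joint. The sketch below uses made-up gains and a toy three-joint target pose; real controllers tune gains per joint and per behavior.

import numpy as np

KP, KD = 300.0, 20.0   # proportional and derivative gains (assumed values)

def pd_torques(q, q_dot, q_target, q_dot_target=0.0):
    # tau = kp * (q_des - q) + kd * (qdot_des - qdot), applied per joint,
    # drives the skeleton toward the designated joint-angle trajectory
    return KP * (q_target - q) + KD * (q_dot_target - q_dot)

q = np.zeros(3)                        # current joint angles
q_dot = np.zeros(3)                    # current joint velocities
q_target = np.array([0.3, -0.1, 0.5])  # designated pose at this time step
print("joint torques:", pd_torques(q, q_dot, q_target))

In practice these torques feed a forward dynamics simulator at every step, and the balance strategies cited above adjust q_target on the fly.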
1907.00893
2954281063
We present a computational design system that assists users to model, optimize, and fabricate quad-robots with soft skins. Our system addresses the challenging task of predicting their physical behavior by fully integrating the multibody dynamics of the mechanical skeleton and the elastic behavior of the soft skin. The developed motion control strategy uses an alternating optimization scheme to avoid expensive full space-time optimization, interleaving space-time optimization for the skeleton and frame-by-frame optimization for the full dynamics. The output is a set of motor torques that drive the robot to achieve a user-prescribed motion trajectory. We also provide a collection of convenient engineering tools and empirical manufacturing guidance to support the fabrication of the designed quad-robot. We validate the feasibility of designs generated with our system through physics simulations and with a physically-fabricated prototype.
Our work is inspired by studies on how to drive soft skin deformation with underlying rigid skeletons or pseudo muscle forces @cite_19 @cite_1 @cite_34 . Two-way coupling of rigid bodies and elastic bodies was considered in @cite_16 . Fast simulation and control of soft robots with various configurations and actuation schemes have also been studied using the finite element method and a reduced compliance-matrix formulation @cite_27 @cite_30 @cite_2 @cite_69 @cite_29 (a minimal instance of this inverse model is sketched after this record).
{ "cite_N": [ "@cite_30", "@cite_69", "@cite_29", "@cite_1", "@cite_16", "@cite_19", "@cite_27", "@cite_2", "@cite_34" ], "mid": [ "1605573896", "2768326939", "2802271488", "2053162587", "1976363785", "1979995593", "2097770951", "2521333807", "1985117280" ], "abstract": [ "Finite Element analysis can provide accurate deformable models for soft-robots. However, using such models is very difficult in a real-time system of control. In this paper, we introduce a generic solution that enables a high-rate control and that is compatible with strong real-time constraints. From a Finite Element analysis, computed at low rate, an inverse model of the robot outputs the setpoint values for the actuator in order to obtain a desired trajectory. This inverse problem uses a QP (quadratic-programming) algorithm based on the equations set by the Finite Element Method. To improve the update rate performances, we propose an asynchronous simulation framework that provides a better trade-off between the deformation accuracy and the computational burden. Complex computations such as accurate FEM deformations are done at low frequency while the control is performed at high frequency with strong real-time constraints. The two simulation loops (high frequency and low frequency loops) are mechanically coupled in order to guarantee mechanical accuracy of the system over time. Finally, the validity of the multi-rate simulation is discussed based on measurements of the evolution in the QP matrix and an experimental validation is conducted to validate the correctness of the high-rate inverse model on a real robot.", "The technological differences between traditional robotics and soft robotics have an impact on all of the modeling tools generally in use, including direct kinematics and inverse models, Jacobians, and dynamics. Due to the lack of precise modeling and control methods for soft robots, the promising concepts of using such design for complex applications (medicine, assistance, domestic robotics...) cannot be practically implemented. This paper presents a first unified software framework dedicated to modeling, simulation and control of soft robots. The framework relies on continuum mechanics for modeling the robotic parts and boundary conditions like actuators or contacts using a unified representation based on Lagrange multipliers. It enables the digital robot to be simulated in its environment using a direct model. The model can also be inverted online using an optimization-based method which allows to control the physical robots in the task space. To demonstrate the effectiveness of the approach, we present various soft robots scenarios including ones where the robot is interacting with its environment. The software has been built on top of SOFA, an open-source framework for deformable online simulation and is available at https: project.inria.fr softrobot", "Abstract This article presents a modeling methodology and experimental validation for soft manipulators to obtain forward kinematic model (FKM) and inverse kinematic model (IKM) under quasi-static conditions (in the literature, these manipulators are usually classified as continuum robots. However, their main characteristic of interest in this article is that they create motion by deformation, as opposed to the classical use of articulations). It offers a way to obtain the kinematic characteristics of this type of soft robots that is suitable for offline path planning and position control. 
The modeling methodology presented relies on continuum mechanics, which does not provide analytic solutions in the general case. Our approach proposes a real-time numerical integration strategy based on finite element method with a numerical optimization based on Lagrange multipliers to obtain FKM and IKM. To reduce the dimension of the problem, at each step, a projection of the model to the constraint space (gathering ...", "We propose a fast physically-based simulation system for skeleton-driven deformable body characters. Our system can generate realistic motions of self-propelled deformable body characters by considering the two-way interactions among the skeleton, the deformable body, and the environment in the dynamic simulation. It can also compute the passive jiggling behavior of a deformable body driven by a kinematic skeletal motion. We show that a well-coordinated combination of: (1) a reduced deformable body model with nonlinear finite elements, (2) a linear-time algorithm for skeleton dynamics, and (3) explicit integration can boost simulation speed to orders of magnitude faster than existing methods, while preserving modeling accuracy as much as possible. Parallel computation on the GPU has also been implemented to obtain an additional speedup for complicated characters. Detailed discussions of our engineering decisions for speed and accuracy of the simulation system are presented in the article. We tested our approach with a variety of skeleton-driven deformable body characters, and the tested characters were simulated in real time or near real time.", "We propose a framework for the full two-way coupling of rigid and deformable bodies, which is achieved with both a unified time integration scheme as well as individual two-way coupled algorithms at each point of that scheme. As our algorithm is two-way coupled in every fashion, we do not require ad hoc methods for dealing with stability issues or interleaving parts of the simulation. We maintain the ability to treat the key desirable aspects of rigid bodies (e.g. contact, collision, stacking, and friction) and deformable bodies (e.g. arbitrary constitutive models, thin shells, and self-collisions). In addition, our simulation framework supports more advanced features such as proportional derivative controlled articulation between rigid bodies. This not only allows for the robust simulation of a number of new phenomena, but also directly lends itself to the design of deformable creatures with proportional derivative controlled articulated rigid skeletons that interact in a life-like way with their environment.", "In this paper, we investigate the impact of the deformable bodies on the control algorithms for physically simulated characters. We hypothesize that ignoring the effect of deformable bodies at the site of contact negatively affects the control algorithms, leading to less robust and unnatural character motions. To verify the hypothesis, we introduce a compact representation for an articulated character with deformable soft tissue and develop a practical system to simulate two-way coupling between rigid and deformable bodies in a robust and efficient manner. We then apply a few simple and widely used control algorithms, such as pose-space tracking control, Cartesian-space tracking control, and a biped controller (SIMBICON), to simulate a variety of behaviors for both full-body locomotion and hand manipulation. 
We conduct a series of experiments to compare our results with the motion generated by these algorithms on a character comprising only rigid bodies. The evaluation shows that the character with soft contact can withstand larger perturbations in a more noisy environment, as well as produce more realistic motion.", "In this paper, we present a new method for the control of soft robots with elastic behavior, piloted by several actuators. The central contribution of this work is the use of the Finite Element Method (FEM), computed in real-time, in the control algorithm. The FEM based simulation computes the nonlinear deformations of the robots at interactive rates. The model is completed by Lagrange multipliers at the actuation zones and at the end-effector position. A reduced compliance matrix is built in order to deal with the necessary inversion of the model. Then, an iterative algorithm uses this compliance matrix to find the contribution of the actuators (force and/or position) that will deform the structure so that the terminal end of the robot follows a given position. Additional constraints, like rigid or deformable obstacles, or the internal characteristics of the actuators are integrated in the control algorithm. We illustrate our method using simulated examples of both serial and parallel structures and we validate it on a real 3D soft robot made of silicone.", "This chapter presents new real-time and physics-based modeling methods dedicated to deformable soft robots. In this approach, continuum mechanics provides the partial derivative equations that govern the deformations, and Finite Element Method (FEM) is used to compute numerical solutions adapted to the robot. A formulation based on Lagrange Multipliers is used to model the behavior of the actuators as well as the contact with the environment. Direct and inverse kinematic models are also obtained for real-time control. Some experiments and numerical results are presented.", "We present a physically-based system to simulate and control the locomotion of soft body characters without skeletons. We use the finite element method to simulate the deformation of the soft body, and we instrument a character with muscle fibers to allow it to actively control its shape. To perform locomotion, we use a variety of intuitive controls such as moving a point on the character, specifying the center of mass or the angular momentum, and maintaining balance. These controllers yield an objective function that is passed to our optimization solver, which handles convex quadratic program with linear complementarity constraints. This solver determines the new muscle fiber lengths, and moreover it determines whether each point of contact should remain static, slide, or lift away from the floor. Our system can automatically find an appropriate combination of muscle contractions that enables a soft character to fulfill various locomotion tasks, including walking, jumping, crawling, rolling and balancing." ] }
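To make the reduced-compliance control loop described above concrete, the following toy sketch solves the box-constrained least-squares problem that stands in for the QP step of @cite_27 @cite_30 : a small compliance matrix maps actuator efforts to an effector displacement, and we look for bounded efforts that bring the effector to a goal. The matrix entries, the bounds, and the use of scipy are illustrative assumptions, not the cited implementations.

```python
# Toy sketch of compliance-matrix-based inverse control for a soft robot.
# All values are made up for illustration; in the cited works W is condensed
# from a full FEM stiffness system via Lagrange multipliers.
import numpy as np
from scipy.optimize import lsq_linear

W = np.array([[0.8, 0.1, 0.0],        # reduced compliance matrix: actuator
              [0.2, 0.6, 0.1]])       # efforts -> 2D effector displacement
delta_free = np.array([0.05, -0.02])  # effector drift with zero actuation
delta_goal = np.array([0.00, 0.03])   # desired effector displacement

# min ||W @ lam - (delta_goal - delta_free)||^2  s.t.  0 <= lam <= 10
# (cables can only pull), a box-constrained stand-in for the cited QP.
res = lsq_linear(W, delta_goal - delta_free, bounds=(0.0, 10.0))
print("actuator efforts:", res.x)
print("reached displacement:", delta_free + W @ res.x)
```

In the asynchronous framework of @cite_30 , a loop of this kind runs at a high control rate while the FEM model that produces W is updated at a lower rate.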
1907.00893
2954281063
We present a computational design system that assists users to model, optimize, and fabricate quad-robots with soft skins. Our system addresses the challenging task of predicting their physical behavior by fully integrating the multibody dynamics of the mechanical skeleton and the elastic behavior of the soft skin. The developed motion control strategy uses an alternating optimization scheme to avoid expensive full space-time optimization, interleaving space-time optimization for the skeleton and frame-by-frame optimization for the full dynamics. The outputs are motor torques to drive the robot to achieve a user-prescribed motion trajectory. We also provide a collection of convenient engineering tools and empirical manufacturing guidance to support the fabrication of the designed quad-robot. We validate the feasibility of designs generated with our system through physics simulations and with a physically-fabricated prototype.
Work on soft body simulation focuses on the formulation of an elastic deformation energy and the proper handling of contact constraints to simulate realistic deformations of soft bodies @cite_71 @cite_43 @cite_68 @cite_39 . A comprehensive survey of physics-based elastic deformation models can be found in @cite_75 .
{ "cite_N": [ "@cite_39", "@cite_43", "@cite_71", "@cite_68", "@cite_75" ], "mid": [ "", "2161084987", "1989871863", "1483241718", "2141654056" ], "abstract": [ "", "We review an algorithm for the finite element simulation of elastoplastic solids which is capable of robustly and efficiently handling arbitrarily large deformation. In fact, the model remains valid even when large parts of the mesh are inverted. The algorithm is straightforward to implement and can be used with any material constitutive model, and for both volumetric solids and thin shells such as cloth. We also discuss a mechanism for controlling plastic deformation, which allows a deformable object to be guided towards a desired final shape without sacrificing realistic behavior, and an improved method for rigid body collision handling in the context of mixed explicit implicit time-stepping. Finally, we present a novel extension of our method to arbitrary element types including specific details for hexahedral elements.", "The theory of elasticity describes deformable materials such as rubber, cloth, paper, and flexible met als. We employ elasticity theory to construct differential equations that model the behavior of non-rigid curves, surfaces, and solids as a function of time. Elastically deformable models are active: they respond in a natural way to applied forces, constraints, ambient media, and impenetrable obstacles. The models are fundamentally dynamic and realistic animation is created by numerically solving their underlying differential equations. Thus, the description of shape and the description of motion are unified.", "We present an e cient algorithm for simulation of non-penetrating exible bodies with nonlinear elasticity. We use nite element methods to discretize the continuum model of non-rigid objects and the fast marching level set method to precompute a distance eld for each undeformed body. As the objects deform, the distance elds are deformed accordingly to estimate penetration depth, allowing enforcement of non-penetration constraints between two colliding elastic bodies. This approach can automatically handle self-penetration and inter-penetration in a uniform manner. We combine quasi-viscous Newton's iteration and adaptive-stepsize incremental loading with a predictorcorrector scheme. Our numerical method is able to achieve both numerical stability and e ciency for our simulation. We demonstrate its e ectiveness on a moderately complex animated scene.", "Physically based deformable models have been widely embraced by the Computer Graphics community. Many problems outlined in a previous survey by Gibson and Mirtich [ GM97] have been addressed, thereby making these models interesting and useful for both offline and real-time applications, such as motion pictures and video games. In this paper, we present the most significant contributions of the past decade, which produce such impressive and perceivably realistic animations and simulations: finite element difference volume methods, mass-spring systems, meshfree methods, coupled particle systems and reduced deformable models based on modal analysis. For completeness, we also make a connection to the simulation of other continua, such as fluids, gases and melting objects. Since time integration is inherent to all simulated phenomena, the general notion of time discretization is treated separately, while specifics are left to the respective models. 
Finally, we discuss areas of application, such as elastoplastic deformation and fracture, cloth and hair animation, virtual surgery simulation, interactive entertainment and fluid/smoke animation, and also suggest areas for future research." ] }
Space-time optimization techniques can also be applied to control the motion of elastic bodies that are represented by volumetric meshes. To reduce the number of variables used to control the vertex positions in the optimization, model reduction techniques are frequently used @cite_25 @cite_53 @cite_47 . Barbič et al. @cite_25 imposed the equation of motion as a constraint on the elastic body deformation, using the discrete adjoint method to compute the gradients of the control forces. @cite_28 integrated the contact forces as additional variables to handle interactions with the environment and solved the space-time objective with alternating optimization, but did not handle the two-way coupling problem we want to solve.
{ "cite_N": [ "@cite_28", "@cite_53", "@cite_47", "@cite_25" ], "mid": [ "2962969699", "2076897996", "2025412867", "2064330201" ], "abstract": [ "We present a method to automatically animate reduced deformable models using high-level objectives. This is achieved by modelling both reduced dynamics and the deformable object's interactions with the environment.", "We present an interactive animation editor for complex deformable object animations. Given an existing animation, the artist directly manipulates the deformable body at any time frame, and the surrounding animation immediately adjusts in response. The automatic adjustments are designed to respect physics, preserve detail in both the input motion and geometry, respect prescribed bilateral contact constraints, and controllably and smoothly decay in space-time. While the utility of interactive editing for rigid body and articulated figure animations is widely recognized, a corresponding approach to deformable bodies has not been technically feasible before. We achieve interactive rates by combining spacetime model reduction, rotation-strain coordinate warping, linearized elasticity, and direct manipulation. This direct editing tool can serve the final stages of animation production, which often call for detailed, direct adjustments that are otherwise tedious to realize by re-simulation or frame-by-frame editing.", "We present a novel method for elastic animation editing with space-time constraints. In a sharp departure from previous approaches, we not only optimize control forces added to a linearized dynamic model, but also optimize material properties to better match user constraints and provide plausible and consistent motion. Our approach achieves efficiency and scalability by performing all computations in a reduced rotation-strain (RS) space constructed with both cubature and geometric reduction, leading to two orders of magnitude improvement over the original RS method. We demonstrate the utility and versatility of our method in various applications, including motion editing, pose interpolation, and estimation of material parameters from existing animation sequences.", "Keyframe animation is a common technique to generate animations of deformable characters and other soft bodies. With spline interpolation, however, it can be difficult to achieve secondary motion effects such as plausible dynamics when there are thousands of degrees of freedom to animate. Physical methods can provide more realism with less user effort, but it is challenging to apply them to quickly create specific animations that closely follow prescribed animator goals. We present a fast space-time optimization method to author physically based deformable object simulations that conform to animator-specified keyframes. We demonstrate our method with FEM deformable objects and mass-spring systems. Our method minimizes an objective function that penalizes the sum of keyframe deviations plus the deviation of the trajectory from physics. With existing methods, such minimizations operate in high dimensions, are slow, memory consuming, and prone to local minima. We demonstrate that significant computational speedups and robustness improvements can be achieved if the optimization problem is properly solved in a low-dimensional space. Selecting a low-dimensional space so that the intent of the animator is accommodated, and that at the same time space-time optimization is convergent and fast, is difficult. 
We present a method that generates a quality low-dimensional space using the given keyframes. It is then possible to find quality solutions to difficult space-time optimization problems robustly and in a manner of minutes." ] }
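As a concrete, heavily simplified illustration of the space-time objective described for @cite_25 (keyframe deviations plus deviation of the trajectory from physics), the sketch below optimizes the trajectory of a single toy elastic degree of freedom. The oscillator, the weights, and the use of a generic quasi-Newton solver are assumptions for illustration; the cited works operate in reduced bases and use adjoint gradients instead.

```python
# Toy space-time optimization: find a whole trajectory x[0..T-1] that hits
# keyframes while staying close to the discrete equation of motion of a
# 1-DOF elastic oscillator. All constants are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

T, h, k_over_m = 60, 0.02, 40.0
keyframes = {0: 0.0, 30: 1.0, 59: 0.0}   # frame -> target value
w_phys = 1e3                             # physics-deviation weight

def objective(x):
    keyframe_term = sum((x[t] - v) ** 2 for t, v in keyframes.items())
    # residual of x_{t+1} - 2 x_t + x_{t-1} + h^2 (k/m) x_t = 0;
    # a nonzero residual plays the role of an implicit control force
    r = x[2:] - 2.0 * x[1:-1] + x[:-2] + (h ** 2) * k_over_m * x[1:-1]
    return keyframe_term + w_phys * np.sum(r ** 2)

sol = minimize(objective, np.zeros(T), method="L-BFGS-B")
print("keyframe errors:", [abs(sol.x[t] - v) for t, v in keyframes.items()])
```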
1907.00959
2954769836
Can we reduce the search cost of Neural Architecture Search (NAS) from days down to only a few hours? NAS methods automate the design of Convolutional Networks (ConvNets) under hardware constraints and they have emerged as key components of AutoML frameworks. However, the NAS problem remains challenging due to the combinatorially large design space and the significant search time (at least 200 GPU-hours). In this work, we alleviate the NAS search cost down to less than 3 hours, while achieving state-of-the-art image classification results under mobile latency constraints. We propose a novel differentiable NAS formulation, namely Single-Path NAS, that uses one single-path over-parameterized ConvNet to encode all architectural decisions based on shared convolutional kernel parameters, hence drastically decreasing the search overhead. Single-Path NAS achieves state-of-the-art top-1 ImageNet accuracy (75.62%), hence outperforming existing mobile NAS methods in similar latency settings (~80ms). In particular, we enhance the accuracy-runtime trade-off in differentiable NAS by treating the Squeeze-and-Excitation path as a fully searchable operation with our novel single-path encoding. Our method has an overall cost of only 8 epochs (24 TPU-hours), which is up to 5,000x faster compared to prior work. Moreover, we study how different NAS formulation choices affect the performance of the designed ConvNets. Furthermore, we exploit the efficiency of our method to answer an interesting question: instead of empirically tuning the hyperparameters of the NAS solver (as in prior work), can we automatically find the hyperparameter values that yield the desired accuracy-runtime trade-off? We open-source our entire codebase at: this https URL.
While complex ConvNet designs have unlocked unprecedented performance levels in computer vision tasks, the accuracy improvement has come at the cost of higher computational complexity, making the deployment of state-of-the-art ConvNets to mobile devices challenging @cite_16 . To this end, a significant body of prior work aims to co-optimize for the inference latency of ConvNets. Earlier approaches rely on human expertise to introduce hardware-efficient operations @cite_20 @cite_23 . Pruning @cite_24 and quantization @cite_18 methods share the same goal of improving the efficiency of ConvNets.
{ "cite_N": [ "@cite_18", "@cite_24", "@cite_23", "@cite_16", "@cite_20" ], "mid": [ "2614392736", "2893585013", "2963163009", "", "2612445135" ], "abstract": [ "Application-specific integrated circuit (ASIC) implementations for Deep Neural Networks (DNNs) have been adopted in many systems because of their higher classification speed. However, although they may be characterized by better accuracy, larger DNNs require significant energy and area, thereby limiting their wide adoption. The energy consumption of DNNs is driven by both memory accesses and computation. Binarized Neural Networks (BNNs), as a trade-off between accuracy and energy consumption, can achieve great energy reduction, and have good accuracy for large DNNs due to its regularization effect. However, BNNs show poor accuracy when a smaller DNN configuration is adopted. In this paper, we propose a new DNN model, LightNN, which replaces the multiplications to one shift or a constrained number of shifts and adds. For a fixed DNN configuration, LightNNs have better accuracy at a slight energy increase than BNNs, yet are more energy efficient with only slightly less accuracy than conventional DNNs. Therefore, LightNNs provide more options for hardware designers to make trade-offs between accuracy and energy. Moreover, for large DNN configurations, LightNNs have a regularization effect, making them better in accuracy than conventional DNNs. These conclusions are verified by experiment using the MNIST and CIFAR-10 datasets for different DNN configurations.", "Resource-efficient convolution neural networks enable not only the intelligence on edge devices but also opportunities in system-level optimization such as scheduling. In this work, we aim to improve the performance of resource-constrained filter pruning by merging two sub-problems commonly considered, i.e., (i) how many filters to prune for each layer and (ii) which filters to prune given a per-layer pruning budget, into a global filter ranking problem. Our framework entails a novel algorithm, dubbed layer-compensated pruning, where meta-learning is involved to determine better solutions. We show empirically that the proposed algorithm is superior to prior art in both effectiveness and efficiency. Specifically, we reduce the accuracy gap between the pruned and original networks from 0.9 to 0.7 with 8x reduction in time needed for meta-learning, i.e., from 1 hour down to 7 minutes. To this end, we demonstrate the effectiveness of our algorithm using ResNet and MobileNetV2 networks under CIFAR-10, ImageNet, and Bird-200 datasets.", "In this paper we describe a new mobile architecture, MobileNetV2, that improves the state of the art performance of mobile models on multiple tasks and benchmarks as well as across a spectrum of different model sizes. We also describe efficient ways of applying these mobile models to object detection in a novel framework we call SSDLite. Additionally, we demonstrate how to build mobile semantic segmentation models through a reduced form of DeepLabv3 which we call Mobile DeepLabv3. is based on an inverted residual structure where the shortcut connections are between the thin bottleneck layers. The intermediate expansion layer uses lightweight depthwise convolutions to filter features as a source of non-linearity. Additionally, we find that it is important to remove non-linearities in the narrow layers in order to maintain representational power. We demonstrate that this improves performance and provide an intuition that led to this design. 
Finally, our approach allows decoupling of the input/output domains from the expressiveness of the transformation, which provides a convenient framework for further analysis. We measure our performance on ImageNet [1] classification, COCO object detection [2], VOC image segmentation [3]. We evaluate the trade-offs between accuracy, and number of operations measured by multiply-adds (MAdd), as well as actual latency, and the number of parameters.", "", "We present a class of efficient models called MobileNets for mobile and embedded vision applications. MobileNets are based on a streamlined architecture that uses depth-wise separable convolutions to build light weight deep neural networks. We introduce two simple global hyper-parameters that efficiently trade off between latency and accuracy. These hyper-parameters allow the model builder to choose the right sized model for their application based on the constraints of the problem. We present extensive experiments on resource and accuracy tradeoffs and show strong performance compared to other popular models on ImageNet classification. We then demonstrate the effectiveness of MobileNets across a wide range of applications and use cases including object detection, finegrain classification, face attributes and large scale geo-localization." ] }
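The pruning @cite_24 and quantization @cite_18 families mentioned above shrink a trained ConvNet in complementary ways. The numpy sketch below shows the two basic mechanisms, unstructured magnitude pruning and symmetric linear quantization; the sparsity level and bit-width are illustrative assumptions and do not reproduce any specific cited method.

```python
# Toy sketches of the two efficiency mechanisms; thresholds and bit-width
# are illustrative assumptions.
import numpy as np

def magnitude_prune(w, sparsity=0.5):
    """Zero out the smallest-|w| fraction of weights (unstructured pruning)."""
    k = int(w.size * sparsity)
    thresh = np.partition(np.abs(w).ravel(), k)[k]
    return np.where(np.abs(w) < thresh, 0.0, w)

def uniform_quantize(w, bits=8):
    """Symmetric linear quantization to a signed integer grid."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax
    return np.round(w / scale) * scale   # de-quantized back for simulation

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))
print("nonzeros after pruning:", np.count_nonzero(magnitude_prune(w)))
print("quantization MSE:", np.mean((w - uniform_quantize(w)) ** 2))
```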
NAS methods aim to automate the design of ConvNets based on reinforcement learning (RL), evolutionary algorithms, or gradient-based formulations @cite_1 @cite_43 @cite_3 @cite_41 @cite_13 . Earlier approaches train an agent (e.g., an RNN controller) by sampling candidate architectures over a cell-based design space, where the same cell is repeated in all layers and the focus is on searching the cell architecture @cite_13 . Nonetheless, training the controller over different architectures makes the search costly. An increasing number of recent methods motivate the need for alleviating the NAS search cost @cite_14 .
{ "cite_N": [ "@cite_14", "@cite_41", "@cite_1", "@cite_3", "@cite_43", "@cite_13" ], "mid": [ "2960010704", "2963374479", "2810075754", "2785430118", "2785366763", "2964081807" ], "abstract": [ "", "Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214.", "", "The effort devoted to hand-crafting image classifiers has motivated the use of architecture search to discover them automatically. Reinforcement learning and evolution have both shown promise for this purpose. This study introduces a regularized version of a popular asynchronous evolutionary algorithm. We rigorously compare it to the non-regularized form and to a highly-successful reinforcement learning baseline. Using the same hardware, compute effort and neural network training code, we conduct repeated experiments side-by-side, exploring different datasets, search spaces and scales. We show regularized evolution consistently produces models with similar or higher accuracy, across a variety of contexts without need for re-tuning parameters. In addition, regularized evolution exhibits considerably better performance than reinforcement learning at early search stages, suggesting it may be the better choice when fewer compute resources are available. This constitutes the first controlled comparison of the two search algorithms in this context. Finally, we present new architectures discovered with regularized evolution that we nickname AmoebaNets. These models set a new state of the art for CIFAR-10 (mean test error = 2.13 ) and mobile-size ImageNet (top-5 accuracy = 92.1 with 5.06M parameters), and reach the current state of the art for ImageNet (top-5 accuracy = 96.2 ).", "We propose Efficient Neural Architecture Search (ENAS), a fast and inexpensive approach for automatic model design. In ENAS, a controller learns to discover neural network architectures by searching for an optimal subgraph within a large computational graph. The controller is trained with policy gradient to select a subgraph that maximizes the expected reward on the validation set. Meanwhile the model corresponding to the selected subgraph is trained to minimize a canonical cross entropy loss. 
Thanks to parameter sharing between child models, ENAS is fast: it delivers strong empirical performances using much fewer GPU-hours than all existing automatic model design approaches, and notably, 1000x less expensive than standard Neural Architecture Search. On the Penn Treebank dataset, ENAS discovers a novel architecture that achieves a test perplexity of 55.8, establishing a new state-of-the-art among all methods without post-training processing. On the CIFAR-10 dataset, ENAS designs novel architectures that achieve a test error of 2.89%, which is on par with NASNet (Zoph et al., 2018), whose test error is 2.65%.", "Developing neural network image classification models often requires significant architecture engineering. In this paper, we study a method to learn the model architectures directly on the dataset of interest. As this approach is expensive when the dataset is large, we propose to search for an architectural building block on a small dataset and then transfer the block to a larger dataset. The key contribution of this work is the design of a new search space (which we call the "NASNet search space") which enables transferability. In our experiments, we search for the best convolutional layer (or "cell") on the CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking together more copies of this cell, each with their own parameters to design a convolutional architecture, which we name a "NASNet architecture". We also introduce a new regularization technique called ScheduledDropPath that significantly improves generalization in the NASNet models. On CIFAR-10 itself, a NASNet found by our method achieves a 2.4% error rate, which is state-of-the-art. Although the cell is not searched for directly on ImageNet, a NASNet constructed from the best cell achieves, among the published works, state-of-the-art accuracy of 82.7% top-1 and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than the best human-invented architectures while having 9 billion fewer FLOPS - a reduction of 28% in computational demand from the previous state-of-the-art model. When evaluated at different levels of computational cost, accuracies of NASNets exceed those of the state-of-the-art human-designed models. For instance, a small version of NASNet also achieves 74% top-1 accuracy, which is 3.1% better than equivalently-sized, state-of-the-art models for mobile platforms. Finally, the image features learned from image classification are generically useful and can be transferred to other computer vision problems. On the task of object detection, the learned features by NASNet used with the Faster-RCNN framework surpass state-of-the-art by 4.0%, achieving 43.1% mAP on the COCO dataset." ] }
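The single-path encoding summarized in the abstract above amounts to sharing one superkernel across kernel-size choices. The PyTorch sketch below illustrates that sharing: a 5x5 weight whose inner 3x3 core is always used, with a soft, norm-thresholded gate deciding whether the outer shell contributes. The gating parametrization here is a simplified assumption, not the paper's exact formulation.

```python
# Rough sketch of a single-path "superkernel": one 5x5 weight tensor encodes
# both a 3x3 and a 5x5 choice through its shared inner core.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SuperKernelConv(nn.Module):
    def __init__(self, cin, cout):
        super().__init__()
        self.w = nn.Parameter(torch.randn(cout, cin, 5, 5) * 0.1)
        self.t = nn.Parameter(torch.zeros(1))  # learned threshold for the shell
        mask = torch.zeros(1, 1, 5, 5)
        mask[..., 1:4, 1:4] = 1.0              # inner 3x3 core
        self.register_buffer("core", mask)

    def forward(self, x):
        shell = self.w * (1.0 - self.core)
        # soft, differentiable decision: use the outer 5x5 shell only if its
        # norm exceeds the learned threshold
        gate = torch.sigmoid(shell.norm() - self.t)
        w_eff = self.w * self.core + gate * shell
        return F.conv2d(x, w_eff, padding=2)

layer = SuperKernelConv(8, 16)
print(layer(torch.randn(1, 8, 32, 32)).shape)  # torch.Size([1, 16, 32, 32])
```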
1907.01058
2955823704
Assigning team labels to players in a sport game is not a trivial task when no prior is known about the visual appearance of each team. Our work builds on a Convolutional Neural Network (CNN) to learn a descriptor, namely a pixel-wise embedding vector, that is similar for pixels depicting players from the same team, and dissimilar when pixels correspond to distinct teams. The advantage of this idea is that no per-game learning is needed, allowing efficient team discrimination as soon as the game starts. In principle, the approach follows the associative embedding framework introduced in arXiv:1611.05424 to differentiate instances of objects. Our work is however different in that it derives the embeddings from a lightweight segmentation network and, more fundamentally, because it considers the assignment of the same embedding to unconnected pixels, as required by pixels of distinct players from the same team. Excellent results, both in terms of team labelling accuracy and generalization to new games/arenas, have been achieved on panoramic views of a large variety of basketball games involving players interactions and occlusions. This makes our method a good candidate to integrate team separation in many CNN-based sport analytics pipelines.
Recent developments in computer vision make extensive use of Convolutional Neural Networks @cite_16 . This section reviews the specific type of CNN, named Fully Convolutional Network (FCN), that is used for image segmentation. It then introduces the recent associative embedding methods considered to turn object class segmentation into object instance segmentation.
{ "cite_N": [ "@cite_16" ], "mid": [ "2117539524" ], "abstract": [ "The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements." ] }
In recent works dealing with sport video analysis, FCNs have been considered for specific segmentation tasks, including player jersey number extraction @cite_25 and soccer field lines and players segmentation @cite_22 . In @cite_5 , a two-step architecture, inspired by @cite_28 and @cite_26 , is even proposed to extract player bounding boxes with team labels. The network, however, needs to be trained on a game-per-game basis, which is impractical for large-scale deployment. None of these works is thus able to differentiate player teams without requiring a dedicated training for each game, as proposed in , where a real-time amenable FCN provides the player segmentation mask, as well as a pixel-wise team-discriminant feature vector.
{ "cite_N": [ "@cite_26", "@cite_22", "@cite_28", "@cite_5", "@cite_25" ], "mid": [ "2963840672", "2892012605", "2474389331", "2794012889", "2608840289" ], "abstract": [ "Abstract: State-of-the-art models for semantic segmentation are based on adaptations of convolutional networks that had originally been designed for image classification. However, dense prediction and image classification are structurally different. In this work, we develop a new convolutional network module that is specifically designed for dense prediction. The presented module uses dilated convolutions to systematically aggregate multi-scale contextual information without losing resolution. The architecture is based on the fact that dilated convolutions support exponential expansion of the receptive field without loss of resolution or coverage. We show that the presented context module increases the accuracy of state-of-the-art semantic segmentation systems. In addition, we examine the adaptation of image classification networks to dense prediction and show that simplifying the adapted network can increase accuracy.", "Automatic interpretation of sports games is a major challenge, especially when these sports feature complex players organizations and game phases. This paper describes a bottom-up approach based on the extraction of semantic features from the video stream of the main camera in the particular case of soccer using scene-specific techniques. In our approach, all the features, ranging from the pixel level to the game event level, have a semantic meaning. First, we design our own scene-specific deep learning semantic segmentation network and hue histogram analysis to extract pixel-level semantics for the field, players, and lines. These pixel-level semantics are then processed to compute interpretative semantic features which represent characteristics of the game in the video stream that are exploited to interpret soccer. For example, they correspond to how players are distributed in the image or the part of the field that is filmed. Finally, we show how these interpretative semantic features can be used to set up and train a semantic-based decision tree classifier for major game events with a restricted amount of training data. The main advantages of our semantic approach are that it only requires the video feed of the main camera to extract the semantic features, with no need for camera calibration, field homography, player tracking, or ball position estimation. While the automatic interpretation of sports games remains challenging, our approach allows us to achieve promising results for the semantic feature extraction and for the classification between major soccer game events such as attack, goal or goal opportunity, defense, and middle game.", "In this paper, we investigate two new strategies to detect objects accurately and efficiently using deep convolutional neural network: 1) scale-dependent pooling and 2) layerwise cascaded rejection classifiers. The scale-dependent pooling (SDP) improves detection accuracy by exploiting appropriate convolutional features depending on the scale of candidate object proposals. The cascaded rejection classifiers (CRC) effectively utilize convolutional features and eliminate negative object proposals in a cascaded manner, which greatly speeds up the detection while maintaining high accuracy. 
In combination of the two, our method achieves significantly better accuracy compared to other state-of-the-arts in three challenging datasets, PASCAL object detection challenge, KITTI object detection benchmark and newly collected Inner-city dataset, while being more efficient.", "Vision-based player detection and classification are important in sports applications. Accuracy, efficiency, and low memory consumption are desirable for real-time tasks such as intelligent broadcasts and event classification. In this paper, we present a convolutional neural network (CNN) that satisfies all these requirements. The network contains a three-branch proposal network and a four-cascade classification network. Our method first trains these cascaded networks from labeled image patches. Then, we efficiently apply the network to a whole image by using a dilation strategy in testing. We conducted experiments on soccer, basketball, ice hockey and pedestrian datasets. Experimental results demonstrate that our method can accurately detect players under challenging conditions. Compared with CNNs that are adapted from general object detection networks such as Faster-RCNN, our approach achieves state-of-the-art accuracy on three types of games (basketball, soccer and ice hockey) with 1000 × fewer parameters. The generality of our method is also demonstrated on a standard pedestrian detection dataset in which our method achieves competitive performance compared with state-of-the-art methods.", "Identifying players in soccer videos is a challenging task, especially in overview shots. Face recognition is not feasible due to low resolution, and jersey number recognition suffers from low resolution, motion blur and unsuitable player pose. Therefore, a method to improve visual identification using spatial constellations is proposed here. This method models a spatial constellation as a histogram over relative positions among all players of the team. Using constellation features might increase identification performance but is not expected to work well as a single mean of identification. Thus, this constellation-based recognition is combined with jersey number recognition using convolutional neural networks. Recognizing the numbers on a player’s shirt is the most straight-forward approach, as there is a direct mapping between numbers and players. Using spatial constellation as a feature for identification is based on the assumption that players do not move randomly over the pitch. Players rather follow a tactical role such as central defender, winger, forward, etc. However in soccer, players do not strictly adhere to these roles, variations occur more or less frequently. By learning constellation models for each player, we avoid a categorical assignment of a player to one single tactical role and therefore incorporate each player’s typical behaviour in terms of switching positions. The presented player identification process is expressed as an assignment problem. Here, an optimal assignment of manually labeled trajectories to known player identities is calculated. Using an assignment problem allows for a flexible fusion of constellation features and jersey numbers by combining the respective cost matrices. Evaluation is performed on 14 different shots of six different Bundesliga matches. By combining both modalities, the accuracy is improved from 0.69 to 0.82 when compared with jersey number recognition only." ] }
Encoder-decoder architectures adopt the encoder structure of classification networks, but replace their dense classification layers by fully convolutional layers that upsample and convolve the coded features up to pixel-wise resolution. SegNet (Segmentation Network) @cite_19 was the first segmentation architecture to reach near real-time inference. It is a symmetrical encoder-decoder network, with skip connections of pooling indices from encoder layers to decoder layers. ENet (Efficient Neural Network) @cite_7 follows SegNet, but comes with various improvements, the most prominent of which is the use of a decoder smaller than the encoder.
{ "cite_N": [ "@cite_19", "@cite_7" ], "mid": [ "2963881378", "2419448466" ], "abstract": [ "We present a novel and practical deep fully convolutional neural network architecture for semantic pixel-wise segmentation termed SegNet. This core trainable segmentation engine consists of an encoder network, a corresponding decoder network followed by a pixel-wise classification layer. The architecture of the encoder network is topologically identical to the 13 convolutional layers in the VGG16 network [1] . The role of the decoder network is to map the low resolution encoder feature maps to full input resolution feature maps for pixel-wise classification. The novelty of SegNet lies is in the manner in which the decoder upsamples its lower resolution input feature map(s). Specifically, the decoder uses pooling indices computed in the max-pooling step of the corresponding encoder to perform non-linear upsampling. This eliminates the need for learning to upsample. The upsampled maps are sparse and are then convolved with trainable filters to produce dense feature maps. We compare our proposed architecture with the widely adopted FCN [2] and also with the well known DeepLab-LargeFOV [3] , DeconvNet [4] architectures. This comparison reveals the memory versus accuracy trade-off involved in achieving good segmentation performance. SegNet was primarily motivated by scene understanding applications. Hence, it is designed to be efficient both in terms of memory and computational time during inference. It is also significantly smaller in the number of trainable parameters than other competing architectures and can be trained end-to-end using stochastic gradient descent. We also performed a controlled benchmark of SegNet and other architectures on both road scenes and SUN RGB-D indoor scene segmentation tasks. These quantitative assessments show that SegNet provides good performance with competitive inference time and most efficient inference memory-wise as compared to other architectures. We also provide a Caffe implementation of SegNet and a web demo at http: mi.eng.cam.ac.uk projects segnet .", "The ability to perform pixel-wise semantic segmentation in real-time is of paramount importance in practical mobile applications. Recent deep neural networks aimed at this task have the disadvantage of requiring a large number of floating point operations and have long run-times that hinder their usability. In this paper, we propose a novel deep neural network architecture named ENet (efficient neural network), created specifically for tasks requiring low latency operation. ENet is up to 18x faster, requires 75x less FLOPs, has 79x less parameters, and provides similar or better accuracy to existing models. We have tested it on CamVid, Cityscapes and SUN datasets and report on comparisons with existing state-of-the-art methods, and the trade-offs between accuracy and processing time of a network. We present performance measurements of the proposed architecture on embedded systems and suggest possible software improvements that could make ENet even faster." ] }
Quite recently, several authors have proposed multi-scale architectures to better balance accuracy and inference complexity. Considering multiple scales makes it possible to exploit both a large receptive field and a fine image resolution with a reduced number of network layers. Among those networks, ICNet (Image Cascade Network) @cite_29 is based on PSPNet (Pyramid Scene Parsing Network) @cite_34 , a state-of-the-art network for non-real-time segmentation. ICNet encodes the features at three scales. The coarsest branch is a PSPNet, while the finer ones are lighter networks, allowing segmentation to be inferred in real time. The two-column network @cite_2 , BiSeNet (Bilateral Segmentation Network) @cite_0 , GUN (Guided Upsampling Network) @cite_31 and ContextNet @cite_9 are composed of two branches.
{ "cite_N": [ "@cite_29", "@cite_9", "@cite_0", "@cite_2", "@cite_31", "@cite_34" ], "mid": [ "2964217532", "2963800917", "2886934227", "2775208825", "2963372527", "2560023338" ], "abstract": [ "We focus on the challenging task of real-time semantic segmentation in this paper. It finds many practical applications and yet is with fundamental difficulty of reducing a large portion of computation for pixel-wise label inference. We propose an image cascade network (ICNet) that incorporates multi-resolution branches under proper label guidance to address this challenge. We provide in-depth analysis of our framework and introduce the cascade feature fusion unit to quickly achieve high-quality segmentation. Our system yields real-time inference on a single GPU card with decent quality results evaluated on challenging datasets like Cityscapes, CamVid and COCO-Stuff.", "", "Semantic segmentation requires both rich spatial information and sizeable receptive field. However, modern approaches usually compromise spatial resolution to achieve real-time inference speed, which leads to poor performance. In this paper, we address this dilemma with a novel Bilateral Segmentation Network (BiSeNet). We first design a Spatial Path with a small stride to preserve the spatial information and generate high-resolution features. Meanwhile, a Context Path with a fast downsampling strategy is employed to obtain sufficient receptive field. On top of the two paths, we introduce a new Feature Fusion Module to combine features efficiently. The proposed architecture makes a right balance between the speed and segmentation performance on Cityscapes, CamVid, and COCO-Stuff datasets. Specifically, for a 2048 ( ) 1024 input, we achieve 68.4 Mean IOU on the Cityscapes test dataset with speed of 105 FPS on one NVIDIA Titan XP card, which is significantly faster than the existing methods with comparable performance.", "We propose an approach to semantic (image) segmentation that reduces the computational costs by a factor of 25 with limited impact on the quality of results. Semantic segmentation has a number of practical applications, and for most such applications the computational costs are critical. The method follows a typical two-column network structure, where one column accepts an input image, while the other accepts a half-resolution version of that image. By identifying specific regions in the full-resolution image that can be safely ignored, as well as carefully tailoring the network structure, we can process approximately 15 highresolution Cityscapes images (1024x2048) per second using a single GTX 980 video card, while achieving a mean intersection-over-union score of 72.9 on the Cityscapes test set.", "", "Scene parsing is challenging for unrestricted open vocabulary and diverse scenes. In this paper, we exploit the capability of global context information by different-region-based context aggregation through our pyramid pooling module together with the proposed pyramid scene parsing network (PSPNet). Our global prior representation is effective to produce good quality results on the scene parsing task, while PSPNet provides a superior framework for pixel-level prediction. The proposed approach achieves state-of-the-art performance on various datasets. It came first in ImageNet scene parsing challenge 2016, PASCAL VOC 2012 benchmark and Cityscapes benchmark. A single PSPNet yields the new record of mIoU accuracy 85.4 on PASCAL VOC 2012 and accuracy 80.2 on Cityscapes." ] }
In @cite_32 , the embedding vector is used to compute the similarity between two pixel neighborhoods from two distinct images, typically to support a tracking task. Interestingly, a proxy task that consists in predicting the (known) color of a target frame based on the color in a reference frame is used to supervise the training of the FCN computing the embeddings. Good embeddings indeed result in relevant pixel associations, and thus in accurate color predictions. This reveals that a FCN can be trained in an indirect manner to support various higher-level tasks based on richer pixel-wise embeddings.
{ "cite_N": [ "@cite_32" ], "mid": [ "2963426332" ], "abstract": [ "We use large amounts of unlabeled video to learn models for visual tracking without manual human supervision. We leverage the natural temporal coherency of color to create a model that learns to colorize gray-scale videos by copying colors from a reference frame. Quantitative and qualitative experiments suggest that this task causes the model to automatically learn to track visual regions. Although the model is trained without any ground-truth labels, our method learns to track well enough to outperform the latest methods based on optical flow. Moreover, our results suggest that failures to track are correlated with failures to colorize, indicating that advancing video colorization may further improve self-supervised visual tracking." ] }
1907.00829
2955528321
In distributed synthesis, we generate a set of process implementations that, together, accomplish an objective against all possible behaviors of the environment. A lot of recent work has focused on systems with causal memory, i.e., sets of asynchronous processes that exchange their causal histories upon synchronization. Decidability results for this problem have been stated either in terms of control games, which extend Zielonka's asynchronous automata by partitioning the actions into controllable and uncontrollable, or in terms of Petri games, which extend Petri nets by partitioning the tokens into system and environment players. The precise connection between these two models was, however, so far an open question. In this paper, we provide the first formal connection between control games and Petri games. We establish the equivalence of the two game models based on weak bisimulations between their strategies. For both directions, we show that a game of one type can be translated into an equivalent game of the other type. We provide exponential upper and lower bounds for the translations. Our translations make it possible to transfer and combine decidability results between the two types of games. As examples, we translate decidability in acyclic communication architectures, originally obtained for control games, to Petri games, and decidability in single-process systems, originally obtained for Petri games, to control games.
For control games, there are non-elementary decidability results for restrictions on the dependencies of actions @cite_0 and for acyclic communication architectures @cite_18 @cite_4 . Decidability of process-based control games has also been obtained for restrictions on the synchronization behavior @cite_9 @cite_21 . Recently, results on control games have been unified and extended by a new proof technique for the class of decomposable games @cite_6 .
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_9", "@cite_21", "@cite_6", "@cite_0" ], "mid": [ "1639817379", "2964248107", "2585454532", "2172209948", "2610229352", "1881260135" ], "abstract": [ "We consider the distributed control problem in the setting of Zielonka asynchronous automata. Such automata are compositions of finite processes communicating via shared actions and evolving asynchronously. Most importantly, processes participating in a shared action can exchange complete information about their causal past. This gives more power to controllers, and avoids simple pathological undecidable cases as in the setting of Pnueli and Rosner. We show the decidability of the control problem for Zielonka automata over acyclic communication architectures. We provide also a matching lower bound, which is l-fold exponential, l being the height of the architecture tree.", "The distributed synthesis problem is about constructing correct distributed systems, i.e., systems that satisfy a given specification. We consider a slightly more general problem of distributed control, where the goal is to restrict the behavior of a given distributed system in order to satisfy the specification. Our systems are finite state machines that communicate via rendez-vous (Zielonka automata). We show decidability of the synthesis problem for all omega-regular local specifications, under the restriction that the communication graph of the system is acyclic. This result extends a previous decidability result for a restricted form of local reachability specifications.", "We study the problem of synthesizing controllers in a natural distributed asynchronous setting: a finite set of plants interact with their local environments and communicate with each other by synchronizing on common actions. The controller-synthesis problem is to come up with a local strategy for each plant such that the controlled behaviour of the network meets a specification. We consider linear time specifications and provide, in some sense, a minimal set of restrictions under which this problem is effectively solvable: we show that the controller-synthesis problem under these restrictions is decidable while the problem becomes undecidable if any one or more of these three restrictions are dropped.", "We identify a network of sequential processes that communicate by synchronizing frequently on common actions. More precisely, we demand that there is a bound k such that if the process p executes k steps without hearing from process q—directly or indirectly—then it will never hear from q again. The non-interleaved branching time behavior of a system of connectedly communicating processes (CCP) is given by its event structure unfolding. We show that the monadic second order (MSO) theory of the event structure unfolding of every CCP is decidable. Using this result, we also show that an associated distributed controller synthesis problem is decidable for linear time specifications that do not discriminate between two different linearizations of the same partially ordered execution.", "The decidability of the distributed version of the Ramadge and Wonham controller synthesis problem, where both the plant and the controllers are modeled as asynchronous automata and the controllers have causal memory is a challenging open problem. 
There exist three classes of plants for which the existence of a correct controller with causal memory has been shown decidable: when the dependency graph of actions is series-parallel, when the processes are connectedly communicating and when the dependency graph of processes is a tree. We design a class of plants, called decomposable games, with a decidable controller synthesis problem. This provides a unified proof of the three existing decidability results as well as new examples of decidable plants.", "This paper deals with distributed control problems by means of distributed games played on Mazurkiewicz traces. The main difference with other notions of distributed games recently introduced is that, instead of having a local view, strategies and controllers are able to use a more accurate memory, based on their causal view. Our main result states that using the causal view makes the control synthesis problem decidable for series-parallel systems for all recognizable winning conditions on finite behaviors, while this problem with local view was proved undecidable even for reachability conditions." ] }
1907.00749
2954713061
Corner cases are the main bottlenecks when applying Artificial Intelligence (AI) systems to safety-critical applications. An AI system should be intelligent enough to detect such situations so that system developers can prepare for subsequent planning. In this paper, we propose semi-supervised anomaly detection considering the imbalance of normal situations. In particular, driving data consists of multiple positive normal situations (e.g., right turn, going straight), some of which (e.g., U-turn) could be as rare as anomalous situations. Existing machine learning based anomaly detection approaches do not fare sufficiently well when applied to such imbalanced data. In this paper, we present a novel multi-task learning based approach that leverages domain-knowledge (maneuver labels) for anomaly detection in driving data. We evaluate the proposed approach both quantitatively and qualitatively on 150 hours of real-world driving data and show improved performance over baseline approaches.
In ensemble learning, different models are trained on the same data (or on random sets of samples from the original data) and majority voting (or another fusion technique) is used to decide the final output. Another advantage of ensemble learning is that the member models are chosen to be complementary in terms of their strengths and weaknesses, i.e., the weaknesses of one are compensated by the strengths of another. For example, @cite_4 proposed an ensemble-based collective and contextual anomaly detection framework. The ensemble consisted of pattern recognition algorithms such as autoencoders and PCA, as well as prediction-based anomaly detectors such as Support Vector Regression (SVR) and Random Forests. They showed that the ensemble classifier performs well compared to the base classifiers.
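A minimal sketch of this majority-voting fusion (the detector set and per-detector thresholds below are illustrative simplifications, not the exact EAD framework of @cite_4):

```python
import numpy as np

def majority_vote_anomaly(scores, thresholds):
    """Fuse binary anomaly decisions from several detectors by majority vote.

    scores:     list of 1-D arrays, one anomaly score per sample per detector
                (e.g. autoencoder reconstruction error, PCA residual, SVR error)
    thresholds: one decision threshold per detector
    Returns a boolean array that is True where most detectors flag an anomaly.
    """
    votes = np.stack([s > t for s, t in zip(scores, thresholds)])  # (n_det, n_samples)
    return votes.sum(axis=0) > votes.shape[0] / 2
```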
{ "cite_N": [ "@cite_4" ], "mid": [ "2594434602" ], "abstract": [ "Abstract During building operation, a significant amount of energy is wasted due to equipment and human-related faults. To reduce waste, today's smart buildings monitor energy usage with the aim of identifying abnormal consumption behaviour and notifying the building manager to implement appropriate energy-saving procedures. To this end, this research proposes a new pattern-based anomaly classifier, the collective contextual anomaly detection using sliding window (CCAD-SW) framework. The CCAD-SW framework identifies anomalous consumption patterns using overlapping sliding windows. To enhance the anomaly detection capacity of the CCAD-SW, this research also proposes the ensemble anomaly detection (EAD) framework. The EAD is a generic framework that combines several anomaly detection classifiers using majority voting. To ensure diversity of anomaly classifiers, the EAD is implemented by combining pattern-based (e.g., CCAD-SW) and prediction-based anomaly classifiers. The research was evaluated using real-world data provided by Powersmiths, located in Brampton, Ontario, Canada. Results show that the EAD framework improved the sensitivity of the CCAD-SW by 3.6 and reduced false alarm rate by 2.7 ." ] }
1907.00924
2943539461
In the context of deep learning, the costliest phase from a computational point of view is the full training of the learning algorithm. However, this process has to be repeated a significant number of times during the design of a new artificial neural network, therefore leading to extremely expensive operations. Here, we propose a low-cost strategy to predict the accuracy of the algorithm, based only on its initial behaviour. To do so, we train the network of interest up to convergence several times, modifying its characteristics at each training. The initial and final accuracies observed during this preliminary process are stored in a database. We then make use of both curve fitting and Support Vector Machines techniques, the latter being trained on the created database, to predict the accuracy of the network, given its accuracy on the initial iterations of its learning. This approach can be of particular interest when the space of the characteristics of the network is notably large or when its full training is highly time-consuming. The results we obtained are promising and encouraged us to apply this strategy to a topical issue: hyper-parameter optimisation (HO). In particular, we focused on the HO of a convolutional neural network for the classification of the databases MNIST and CIFAR-10. By using our method of prediction, and an algorithm implemented by us for a probabilistic exploration of the hyper-parameter space, we were able to find the hyper-parameter settings corresponding to the optimal accuracies already known in the literature, at quite a low cost.
Two of the most intuitive and widespread methods are grid search and random search @cite_1 . These techniques, however, are not well suited to applications where a given set of hyper-parameters is costly to evaluate. As a result, Sequential Model-Based Optimisation (SMBO) @cite_8 algorithms have been employed in many settings where the performance evaluation of a model is expensive. They approximate the black-box objective function @math that is to be maximised with a surrogate function that is cheaper to evaluate. At each iteration of the algorithm, the next point at which the objective is evaluated is chosen by maximising a chosen criterion over the surrogate. Several SMBO algorithms have been proposed in the literature; they differ in the criteria by which they optimise the surrogate and in the way they model the surrogate given the observation history. Two of the most famous SMBO approaches are Bayesian Optimisation @cite_9 @cite_10 and the Tree-structured Parzen Estimator strategy @cite_2 .
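The generic SMBO loop can be sketched as follows (a schematic under our own naming; for Bayesian Optimisation the surrogate would be a Gaussian process, for TPE a pair of Parzen estimators, and the criterion something like expected improvement):

```python
import numpy as np

def smbo(objective, sample_config, fit_surrogate, criterion, n_init=5, n_iter=20):
    """Sequential Model-Based Optimisation, schematically.

    objective:     expensive black-box function f to maximise
    sample_config: draws one random hyper-parameter setting
    fit_surrogate: fits a cheap model to the observation history
    criterion:     scores a candidate under the surrogate
    """
    X = [sample_config() for _ in range(n_init)]
    y = [objective(x) for x in X]                            # costly evaluations
    for _ in range(n_iter):
        surrogate = fit_surrogate(X, y)
        candidates = [sample_config() for _ in range(1000)]  # cheap to score
        x_next = max(candidates, key=lambda x: criterion(surrogate, x))
        X.append(x_next)
        y.append(objective(x_next))                          # one more costly evaluation
    best = int(np.argmax(y))
    return X[best], y[best]
```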
{ "cite_N": [ "@cite_8", "@cite_9", "@cite_1", "@cite_2", "@cite_10" ], "mid": [ "60686164", "2192203593", "2097998348", "", "2151238122" ], "abstract": [ "State-of-the-art algorithms for hard computational problems often expose many parameters that can be modified to improve empirical performance. However, manually exploring the resulting combinatorial space of parameter settings is tedious and tends to lead to unsatisfactory outcomes. Recently, automated approaches for solving this algorithm configuration problem have led to substantial improvements in the state of the art for solving various problems. One promising approach constructs explicit regression models to describe the dependence of target algorithm performance on parameter settings; however, this approach has so far been limited to the optimization of few numerical algorithm parameters on single instances. In this paper, we extend this paradigm for the first time to general algorithm configuration problems, allowing many categorical parameters and optimization for sets of instances. We experimentally validate our new algorithm configuration procedure by optimizing a local search and a tree search solver for the propositional satisfiability problem (SAT), as well as the commercial mixed integer programming (MIP) solver CPLEX. In these experiments, our procedure yielded state-of-the-art performance, and in many cases outperformed the previous best configuration approach.", "Big Data applications are typically associated with systems involving large numbers of users, massive complex software systems, and large-scale heterogeneous computing and storage architectures. The construction of such systems involves many distributed design choices. The end products (e.g., recommendation systems, medical analysis tools, real-time game engines, speech recognizers) thus involve many tunable configuration parameters. These parameters are often specified and hard-coded into the software by various developers or teams. If optimized jointly, these parameters can result in significant improvements. Bayesian optimization is a powerful tool for the joint optimization of design choices that is gaining great popularity in recent years. It promises greater automation so as to increase both product quality and human productivity. This review paper introduces Bayesian optimization, highlights some of its methodological aspects, and showcases a wide range of applications.", "Grid search and manual search are the most widely used strategies for hyper-parameter optimization. This paper shows empirically and theoretically that randomly chosen trials are more efficient for hyper-parameter optimization than trials on a grid. Empirical evidence comes from a comparison with a large previous study that used grid search and manual search to configure neural networks and deep belief networks. Compared with neural networks configured by a pure grid search, we find that random search over the same domain is able to find models that are as good or better within a small fraction of the computation time. Granting random search the same computational budget, random search finds better models by effectively searching a larger, less promising configuration space. Compared with deep belief networks configured by a thoughtful combination of manual search and grid search, purely random search over the same 32-dimensional configuration space found statistically equal performance on four of seven data sets, and superior performance on one of seven. 
A Gaussian process analysis of the function from hyper-parameters to validation set performance reveals that for most data sets only a few of the hyper-parameters really matter, but that different hyper-parameters are important on different data sets. This phenomenon makes grid search a poor choice for configuring algorithms for new data sets. Our analysis casts some light on why recent \"High Throughput\" methods achieve surprising success--they appear to search through a large number of hyper-parameters because most hyper-parameters do not matter much. We anticipate that growing interest in large hierarchical models will place an increasing burden on techniques for hyper-parameter optimization; this work shows that random search is a natural baseline against which to judge progress in the development of adaptive (sequential) hyper-parameter optimization algorithms.", "", "This paper presents a taxonomy of existing approaches for using response surfaces for global optimization. Each method is illustrated with a simple numerical example that brings out its advantages and disadvantages. The central theme is that methods that seem quite reasonable often have non-obvious failure modes. Understanding these failure modes is essential for the development of practical algorithms that fulfill the intuitive promise of the response surface approach." ] }
1907.00924
2943539461
In the context of deep learning, the costliest phase from a computational point of view is the full training of the learning algorithm. However, this process has to be repeated a significant number of times during the design of a new artificial neural network, therefore leading to extremely expensive operations. Here, we propose a low-cost strategy to predict the accuracy of the algorithm, based only on its initial behaviour. To do so, we train the network of interest up to convergence several times, modifying its characteristics at each training. The initial and final accuracies observed during this preliminary process are stored in a database. We then make use of both curve fitting and Support Vector Machines techniques, the latter being trained on the created database, to predict the accuracy of the network, given its accuracy on the initial iterations of its learning. This approach can be of particular interest when the space of the characteristics of the network is notably large or when its full training is highly time-consuming. The results we obtained are promising and encouraged us to apply this strategy to a topical issue: hyper-parameter optimisation (HO). In particular, we focused on the HO of a convolutional neural network for the classification of the databases MNIST and CIFAR-10. By using our method of prediction, and an algorithm implemented by us for a probabilistic exploration of the hyper-parameter space, we were able to find the hyper-parameter settings corresponding to the optimal accuracies already known in the literature, at quite a low cost.
More recently, new hyper-parameter optimisation methods based on reinforcement learning have emerged @cite_4 @cite_3 @cite_7 @cite_5 . The goal of most of them is to find the neural network (NN) or CNN architectures that are likely to yield optimised performance. Thus, they seek the appropriate architectural hyper-parameters, such as the number of layers or the structure of each convolutional layer, while many other hyper-parameters, such as the learning rate and regularisation parameters, remain manually chosen in the end. In any case, while all the above-mentioned strategies aim to evaluate the expensive objective function @math (which, in the case of an NN or CNN, is the prediction accuracy) as seldom as possible, to the best of our knowledge very few algorithms offer a method to reduce the evaluation cost of @math itself.
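One way to reduce the evaluation cost of @math , in the spirit of the strategy proposed in this paper, is to extrapolate a partial learning curve. The sketch below uses a saturating power law, a common choice for such extrapolation; the exact functional form and the SVM component of the paper are not reproduced here:

```python
import numpy as np
from scipy.optimize import curve_fit

def predict_final_accuracy(epochs, accs, horizon):
    """Extrapolate early training accuracy to a target epoch by curve fitting."""
    def power_law(t, a, b, c):
        return a - b * np.power(t, -c)   # saturates towards a as t grows

    params, _ = curve_fit(power_law, epochs, accs,
                          p0=(accs[-1], 1.0, 0.5), maxfev=10000)
    return float(power_law(horizon, *params))

# Example: accuracies of the first five epochs, predicted accuracy at epoch 100.
est = predict_final_accuracy(np.arange(1, 6),
                             np.array([0.62, 0.71, 0.75, 0.78, 0.80]), 100)
```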
{ "cite_N": [ "@cite_5", "@cite_4", "@cite_7", "@cite_3" ], "mid": [ "2963536136", "2553303224", "2796265726", "2556833785" ], "abstract": [ "", "Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214.", "Convolutional neural networks have gained a remarkable success in computer vision. However, most usable network architectures are hand-crafted and usually require expertise and elaborate design. In this paper, we provide a block-wise network generation pipeline called BlockQNN which automatically builds high-performance networks using the Q-Learning paradigm with epsilon-greedy exploration strategy. The optimal network block is constructed by the learning agent which is trained sequentially to choose component layers. We stack the block to construct the whole auto-generated network. To accelerate the generation process, we also propose a distributed asynchronous framework and an early stop strategy. The block-wise generation brings unique advantages: (1) it performs competitive results in comparison to the hand-crafted state-of-the-art networks on image classification, additionally, the best network generated by BlockQNN achieves 3.54 top-1 error rate on CIFAR-10 which beats all existing auto-generate networks. (2) in the meanwhile, it offers tremendous reduction of the search space in designing networks which only spends 3 days with 32 GPUs, and (3) moreover, it has strong generalizability that the network built on CIFAR also performs well on a larger-scale ImageNet dataset.", "At present, designing convolutional neural network (CNN) architectures requires both human expertise and labor. New architectures are handcrafted by careful experimentation or modified from a handful of existing networks. We introduce MetaQNN, a meta-modeling algorithm based on reinforcement learning to automatically generate high-performing CNN architectures for a given learning task. The learning agent is trained to sequentially choose CNN layers using @math -learning with an @math -greedy exploration strategy and experience replay. The agent explores a large but finite space of possible architectures and iteratively discovers designs with improved performance on the learning task. 
On image classification benchmarks, the agent-designed networks (consisting of only standard convolution, pooling, and fully-connected layers) beat existing networks designed with the same layer types and are competitive against the state-of-the-art methods that use more complex layer types. We also outperform existing meta-modeling approaches for network design on image classification tasks." ] }
1907.00831
2953586371
In online multiple pedestrian tracking it is of great importance to construct a reliable cost matrix for assigning observations to tracks. Each element of the cost matrix is constructed using a similarity measure. Many previous works have proposed their own similarity calculation methods consisting of a geometric model (e.g. bounding box coordinates) and an appearance model. In particular, the appearance model contains information of higher dimension compared to the geometric model. Thanks to the recent success of deep learning based methods, handling high-dimensional appearance information has become possible. Among many deep networks, a siamese network with a triplet loss is popularly adopted as an appearance feature extractor. Since the siamese network can extract features of each input independently, it is possible to adaptively model tracks (e.g. linear update). However, it is not suitable for the multi-object setting, which requires comparison with other inputs. In this paper we propose a novel track appearance modeling approach based on a joint inference network to address this issue. The proposed method enables the comparison of two inputs to be used for adaptive appearance modeling. It contributes to disambiguating target-observation matching and consolidating identity consistency. Extensive experimental results support the effectiveness of our method.
There have been many existing works on appearance modeling for multi-target tracking. Most of these works extract target-specific features from cropped RGB images. In this manner, many hand-crafted features were proposed, such as color histograms @cite_42 @cite_50 , optical flow @cite_23 @cite_8 , and histograms of gradients @cite_30 , to name a few. However, the performance of those trackers is still limited. Since deep learning was introduced to computer vision, several online and offline multi-target trackers have adopted deep learning for appearance modeling. @cite_13 extracted appearance features through a siamese network, associated those features using an LSTM, and then solved the tracking problem in the MHT (Multiple Hypothesis Tracking) framework. @cite_45 used a siamese network with a triplet loss for appearance modeling and adaptively trained the network during tracking. @cite_7 extended the triplet loss to a quadruplet loss with additional margin parameters. It is undeniable that deep architectures brought improvements in tracking performance. However, target-specific-feature based methods have a weakness when taking noisy inputs: they cannot consider the counterpart against which a feature is to be compared.
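For reference, the triplet loss commonly used to train such siamese appearance extractors can be sketched as follows (PyTorch; the margin value is illustrative):

```python
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet loss on L2-normalized appearance embeddings.

    anchor/positive: embeddings of two crops of the same identity
    negative:        embedding of a crop of a different identity
    Pulls same-identity pairs together and pushes different identities
    at least `margin` further apart.
    """
    a, p, n = (F.normalize(x, dim=1) for x in (anchor, positive, negative))
    d_pos = (a - p).pow(2).sum(dim=1)
    d_neg = (a - n).pow(2).sum(dim=1)
    return F.relu(d_pos - d_neg + margin).mean()
```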
{ "cite_N": [ "@cite_30", "@cite_7", "@cite_8", "@cite_42", "@cite_45", "@cite_23", "@cite_50", "@cite_13" ], "mid": [ "2474389331", "2749203358", "2784549149", "2155243759", "2604679602", "1531192956", "", "2895071559" ], "abstract": [ "In this paper, we investigate two new strategies to detect objects accurately and efficiently using deep convolutional neural network: 1) scale-dependent pooling and 2) layerwise cascaded rejection classifiers. The scale-dependent pooling (SDP) improves detection accuracy by exploiting appropriate convolutional features depending on the scale of candidate object proposals. The cascaded rejection classifiers (CRC) effectively utilize convolutional features and eliminate negative object proposals in a cascaded manner, which greatly speeds up the detection while maintaining high accuracy. In combination of the two, our method achieves significantly better accuracy compared to other state-of-the-arts in three challenging datasets, PASCAL object detection challenge, KITTI object detection benchmark and newly collected Inner-city dataset, while being more efficient.", "We propose Quadruplet Convolutional Neural Networks (Quad-CNN) for multi-object tracking, which learn to associate object detections across frames using quadruplet losses. The proposed networks consider target appearances together with their temporal adjacencies for data association. Unlike conventional ranking losses, the quadruplet loss enforces an additional constraint that makes temporally adjacent detections more closely located than the ones with large temporal gaps. We also employ a multi-task loss to jointly learn object association and bounding box regression for better localization. The whole network is trained end-to-end. For tracking, the target association is performed by minimax label propagation using the metric learned from the proposed network. We evaluate performance of our multi-object tracking algorithm on public MOT Challenge datasets, and achieve outstanding results.", "We propose an online multiple object tracking algorithm that exploits optical flow and convolutional features to handle noisy detections as well as frequent occlusion. To achieve robust tracking, we develop a data association method that deals with tracking scenarios of increasing difficulty. For easy scenarios, we use motion affinity to associate detections with objects. For ambiguous situations, we propose to use an appearance model based on convolutional features and correlation filters to complement template matching methods. For difficult cases where objects are under heavy occlusion, we carry out occlusion analysis, which exploits the relationship between targets and occluders to predict potential object locations. To deal with noisy detections, false positives are detected and removed on both raw detection and tracklet levels, while missing and inaccurate detections are recovered or corrected via short-term tracking. Experimental results on two benchmark datasets demonstrate that the proposed online algorithm performs favorably against the state-of-the-art methods.", "In this paper, we introduce a novel real-time tracker based on color, texture and motion information. RGB color histogram and correlogram (autocorrelogram) are exploited as color cues and texture properties are represented by local binary patterns (LBP). Object's motion is taken into account through location and trajectory. After extraction, these features are used to build a unifying distance measure. 
The measure is utilized in tracking and in the classification event, in which an object is leaving a group. The initial object detection is done by a texture-based background subtraction algorithm. The experiments on indoor and outdoor surveillance videos show that a unified system works better than the versions based on single features. It also copes well with low illumination conditions and low frame rates which are common in large scale surveillance systems.", "Online multi-object tracking aims at estimating the tracks of multiple objects instantly with each incoming frame and the information provided up to the moment. It still remains a difficult problem in complex scenes, because of the large ambiguity in associating multiple objects in consecutive frames and the low discriminability between objects appearances. In this paper, we propose a robust online multi-object tracking method that can handle these difficulties effectively. We first define the tracklet confidence using the detectability and continuity of a tracklet, and decompose a multi-object tracking problem into small subproblems based on the tracklet confidence. We then solve the online multi-object tracking problem by associating tracklets and detections in different ways according to their confidence values. Based on this strategy, tracklets sequentially grow with online-provided detections, and fragmented tracklets are linked up with others without any iterative and expensive association steps. For more reliable association between tracklets and detections, we also propose a deep appearance learning method to learn a discriminative appearance model from large training datasets, since the conventional appearance learning methods do not provide rich representation that can distinguish multiple objects with large appearance variations. In addition, we combine online transfer learning for improving appearance discriminability by adapting the pre-trained deep model during online tracking. Experiments with challenging public datasets show distinct performance improvement over other state-of-the-arts batch and online tracking methods, and prove the effect and usefulness of the proposed methods for online multi-object tracking.", "In this paper, we tackle two key aspects of multiple target tracking problem: 1) designing an accurate affinity measure to associate detections and 2) implementing an efficient and accurate (near) online multiple target tracking algorithm. As for the first contribution, we introduce a novel Aggregated Local Flow Descriptor (ALFD) that encodes the relative motion pattern between a pair of temporally distant detections using long term interest point trajectories (IPTs). Leveraging on the IPTs, the ALFD provides a robust affinity measure for estimating the likelihood of matching detections regardless of the application scenarios. As for another contribution, we present a Near-Online Multi-target Tracking (NOMT) algorithm. The tracking problem is formulated as a data-association between targets and detections in a temporal window, that is performed repeatedly at every frame. While being efficient, NOMT achieves robustness via integrating multiple cues including ALFD metric, target dynamics, appearance similarity, and long term trajectory regularization into the model. Our ablative analysis verifies the superiority of the ALFD metric over the other conventional affinity metrics. We run a comprehensive experimental evaluation on two challenging tracking datasets, KITTI [16] and MOT [2] datasets. 
The NOMT method combined with ALFD metric achieves the best accuracy in both datasets with significant margins (about 10% higher MOTA) over the state-of-the-art.", "", "In recent deep online and near-online multi-object tracking approaches, a difficulty has been to incorporate long-term appearance models to efficiently score object tracks under severe occlusion and multiple missing detections. In this paper, we propose a novel recurrent network model, the Bilinear LSTM, in order to improve the learning of long-term appearance models via a recurrent network. Based on intuitions drawn from recursive least squares, Bilinear LSTM stores building blocks of a linear predictor in its memory, which is then coupled with the input in a multiplicative manner, instead of the additive coupling in conventional LSTM approaches. Such coupling resembles an online learned classifier/regressor at each time step, which we have found to improve performances in using LSTM for appearance modeling. We also propose novel data augmentation approaches to efficiently train recurrent models that score object tracks on both appearance and motion. We train an LSTM that can score object tracks based on both appearance and motion and utilize it in a multiple hypothesis tracking framework. In experiments, we show that with our novel LSTM model, we achieved state-of-the-art performance on near-online multiple object tracking on the MOT 2016 and MOT 2017 benchmarks." ] }
1907.00831
2953586371
In online multiple pedestrian tracking it is of great importance to construct a reliable cost matrix for assigning observations to tracks. Each element of the cost matrix is constructed using a similarity measure. Many previous works have proposed their own similarity calculation methods consisting of a geometric model (e.g. bounding box coordinates) and an appearance model. In particular, the appearance model contains information of higher dimension compared to the geometric model. Thanks to the recent success of deep learning based methods, handling high-dimensional appearance information has become possible. Among many deep networks, a siamese network with a triplet loss is popularly adopted as an appearance feature extractor. Since the siamese network can extract features of each input independently, it is possible to adaptively model tracks (e.g. linear update). However, it is not suitable for the multi-object setting, which requires comparison with other inputs. In this paper we propose a novel track appearance modeling approach based on a joint inference network to address this issue. The proposed method enables the comparison of two inputs to be used for adaptive appearance modeling. It contributes to disambiguating target-observation matching and consolidating identity consistency. Extensive experimental results support the effectiveness of our method.
A Joint-Inference Network (JI-Net) was proposed to address the aforementioned issues and has been adopted in offline multi-target tracking problems. @cite_0 used the JI-Net to extract an appearance similarity feature. It fuses the appearance feature with geometric information using a gradient boosting algorithm and solves a global optimization through linear programming. @cite_46 additionally concatenated pose information to the input of the JI-Net. The output similarity is used as the edge cost for a global multi-cut problem. Although this shows effective performance in an offline framework, it is not suitable for online tracking due to the absence of target-specific features. In this paper we provide ways to apply the JI-Net to the online multi-target tracking framework.
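A schematic of the joint-inference idea (layer sizes are ours; the essential point is only the channel-wise stacking of the two crops):

```python
import torch
import torch.nn as nn

class JointInferenceNet(nn.Module):
    """Joint-inference similarity network, schematically.

    The two RGB crops are stacked channel-wise (6 input channels), so
    the network compares them jointly instead of embedding each crop
    independently as a siamese network does.
    """
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(6, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)              # matching logit

    def forward(self, crop_a, crop_b):            # each (B, 3, H, W)
        x = torch.cat([crop_a, crop_b], dim=1)    # (B, 6, H, W) joint input
        return self.head(self.features(x).flatten(1))
```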
{ "cite_N": [ "@cite_0", "@cite_46" ], "mid": [ "2964015640", "2739491435" ], "abstract": [ "This paper introduces a novel approach to the task of data association within the context of pedestrian tracking, by introducing a two-stage learning scheme to match pairs of detections. First, a Siamese convolutional neural network (CNN) is trained to learn descriptors encoding local spatio-temporal structures between the two input image patches, aggregating pixel values and optical flow information. Second, a set of contextual features derived from the position and size of the compared input patches are combined with the CNN output by means of a gradient boosting classifier to generate the final matching probability. This learning approach is validated by using a linear programming based multi-person tracker showing that even a simple and efficient tracker may outperform much more complex models when fed with our learned matching probabilities. Results on publicly available sequences show that our method meets state-of-the-art standards in multiple people tracking.", "Tracking multiple persons in a monocular video of a crowded scene is a challenging task. Humans can master it even if they loose track of a person locally by re-identifying the same person based on their appearance. Care must be taken across long distances, as similar-looking persons need not be identical. In this work, we propose a novel graph-based formulation that links and clusters person hypotheses over time by solving an instance of a minimum cost lifted multicut problem. Our model generalizes previous works by introducing a mechanism for adding long-range attractive connections between nodes in the graph without modifying the original set of feasible solutions. This allows us to reward tracks that assign detections of similar appearance to the same person in a way that does not introduce implausible solutions. To effectively match hypotheses over longer temporal gaps we develop new deep architectures for re-identification of people. They combine holistic representations extracted with deep networks and body pose layout obtained with a state-of-the-art pose estimation model. We demonstrate the effectiveness of our formulation by reporting a new state-of-the-art for the MOT16 benchmark. The code and pre-trained models are publicly available." ] }
1907.00734
2955302411
Detecting novel objects without class information is not trivial, as it is difficult to generalize from a small training set. This is an interesting problem for underwater robotics, as modeling marine objects is inherently more difficult in sonar images, and training data might not be available a priori. Detection proposal algorithms can be used for this purpose but usually require a large number of output bounding boxes. In this paper we propose the use of a fully convolutional neural network that regresses an objectness value directly from a Forward-Looking sonar image. By ranking objectness, we can produce high recall (96%) with only 100 proposals per image. In comparison, EdgeBoxes requires 5000 proposals to achieve a slightly better recall of 97%, while Selective Search requires 2000 proposals to achieve 95% recall. We also show that our method outperforms a template matching baseline by a considerable margin, and is able to generalize to completely new objects. We expect that this kind of technique can be used in the field to find lost objects under the sea.
The underwater perception literature contains many techniques to detect objects in sonar images. A very popular option is template matching @cite_19 , where a set of templates is cross-correlated with the input image, producing maximum correlation where the object is located. A threshold is usually set to avoid false positives.
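A minimal version of such a threshold-based template detector, using OpenCV's normalized cross-correlation (the threshold value is illustrative):

```python
import cv2
import numpy as np

def detect_by_template(sonar_img, template, threshold=0.7):
    """Template matching on a single-channel sonar image (sketch).

    Cross-correlates the template over the image and keeps every
    location whose normalized correlation exceeds `threshold`.
    """
    response = cv2.matchTemplate(sonar_img, template, cv2.TM_CCOEFF_NORMED)
    ys, xs = np.where(response >= threshold)
    h, w = template.shape[:2]
    return [(int(x), int(y), w, h) for x, y in zip(xs, ys)]  # (x, y, w, h) boxes
```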
{ "cite_N": [ "@cite_19" ], "mid": [ "2083769217" ], "abstract": [ "Underwater chain cleaning and inspection tasks are costly and time consuming operations that must be performed periodically to guarantee the safety of the moorings. We propose a framework towards an efficient and cost-effective solution by using an autonomous underwater vehicle equipped with a forward-looking sonar. As a first step, we tackle the problem of individual chain link detection from the challenging forward-looking sonar data. To cope with occlusions and intensity variations due to viewpoint changes, the recognition problem is addressed as local pattern matching of the different link parts. We exploit the high frame-rate of the sonar to improve, by registration, the signal-to-noise ratio of the individual sonar frames and to cluster the local detections over time to increase robustness. Experiments with sonar images of a real chain are reported, showing a high percentage of correct link detections with good accuracy while potentially keeping real-time capabilities." ] }
1907.00734
2955302411
Detecting novel objects without class information is not trivial, as it is difficult to generalize from a small training set. This is an interesting problem for underwater robotics, as modeling marine objects is inherently more difficult in sonar images, and training data might not be available a priori. Detection proposal algorithms can be used for this purpose but usually require a large number of output bounding boxes. In this paper we propose the use of a fully convolutional neural network that regresses an objectness value directly from a Forward-Looking sonar image. By ranking objectness, we can produce high recall (96%) with only 100 proposals per image. In comparison, EdgeBoxes requires 5000 proposals to achieve a slightly better recall of 97%, while Selective Search requires 2000 proposals to achieve 95% recall. We also show that our method outperforms a template matching baseline by a considerable margin, and is able to generalize to completely new objects. We expect that this kind of technique can be used in the field to find lost objects under the sea.
Another option is using classic computer vision methods, like the boosted cascade of weak classifiers @cite_10 , but this only works well on objects that produce large sonar shadows, as Haar features correlate very well with such shadows.
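For intuition, the sketch below evaluates a two-rectangle Haar-like feature over an integral image, the kind of highlight-to-shadow contrast such cascades exploit (our own minimal sketch, not the full boosted cascade):

```python
import numpy as np

def integral_image(img):
    """Summed-area table enabling O(1) rectangle sums."""
    return img.cumsum(axis=0).cumsum(axis=1)

def haar_two_rect(ii, y, x, h, w):
    """Left-half minus right-half rectangle sum of the window (y, x, h, w)."""
    def box_sum(y0, x0, y1, x1):   # inclusive corners
        s = ii[y1, x1]
        if y0 > 0: s -= ii[y0 - 1, x1]
        if x0 > 0: s -= ii[y1, x0 - 1]
        if y0 > 0 and x0 > 0: s += ii[y0 - 1, x0 - 1]
        return s

    half = w // 2
    left = box_sum(y, x, y + h - 1, x + half - 1)
    right = box_sum(y, x + half, y + h - 1, x + w - 1)
    return left - right   # large response on bright-highlight/dark-shadow edges
```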
{ "cite_N": [ "@cite_10" ], "mid": [ "2607190376" ], "abstract": [ "Detection of underwater objects is a critical task for a variety of underwater applications (off-shore, archeology, marine science, mine detection). This task is traditionally carried out by a skilled human operator. However, with the appearance of Autonomous Underwater Vehicles, automated processing is now needed to tackle the large amount of data produced and to enable on the fly adaptation of the missions and near real time update of the operator. In this paper we propose a new method for object detection in sonar imagery capable of processing images extremely rapidly based on the Viola and Jones boosted classifiers cascade. Unlike most previously proposed approaches based on a model of the target, our method is based on in-situ learning of the target responses and of the local clutter. Learning the clutter is vitally important in complex terrains to obtain low false alarm rates while achieving high detection accuracy. Results obtained on real and synthetic images on a variety of challenging terrains are presented to show the discriminative power of such an approach." ] }
1907.00734
2955302411
Detecting novel objects without class information is not trivial, as it is difficult to generalize from a small training set. This is an interesting problem for underwater robotics, as modeling marine objects is inherently more difficult in sonar images, and training data might not be available a priori. Detection proposal algorithms can be used for this purpose but usually require a large number of output bounding boxes. In this paper we propose the use of a fully convolutional neural network that regresses an objectness value directly from a Forward-Looking sonar image. By ranking objectness, we can produce high recall (96%) with only 100 proposals per image. In comparison, EdgeBoxes requires 5000 proposals to achieve a slightly better recall of 97%, while Selective Search requires 2000 proposals to achieve 95% recall. We also show that our method outperforms a template matching baseline by a considerable margin, and is able to generalize to completely new objects. We expect that this kind of technique can be used in the field to find lost objects under the sea.
The concept of detection proposals was introduced in the computer vision literature @cite_18 : instead of using an expensive sliding window to detect objects, the detection process can be "guided" by a subset of windows that are likely to contain objects. A detection proposals algorithm infers these bounding boxes (also called proposals) from image content. Proposals are also linked to the concept of "objectness" @cite_0 , where the authors define it as "quantifying how likely it is for an image window to contain an object of any class". A set of predefined cues is combined in order to produce objectness, which can be used to discriminate between object and background windows.
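Proposal methods are typically compared by the recall of ground-truth objects within the top-k proposals, the metric quoted throughout this paper; a sketch of its computation:

```python
import numpy as np

def iou(box, boxes):
    """IoU between one (x1, y1, x2, y2) box and an (N, 4) array of boxes."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = (box[2] - box[0]) * (box[3] - box[1])
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area + areas - inter)

def recall_at_k(gt_boxes, proposals, scores, k=100, thresh=0.5):
    """Fraction of ground-truth objects covered by the k highest-scored proposals."""
    top = proposals[np.argsort(-scores)[:k]]
    hits = sum(iou(gt, top).max() >= thresh for gt in gt_boxes)
    return hits / max(len(gt_boxes), 1)
```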
{ "cite_N": [ "@cite_0", "@cite_18" ], "mid": [ "2066624635", "1555385401" ], "abstract": [ "We present a generic objectness measure, quantifying how likely it is for an image window to contain an object of any class. We explicitly train it to distinguish objects with a well-defined boundary in space, such as cows and telephones, from amorphous background elements, such as grass and road. The measure combines in a Bayesian framework several image cues measuring characteristics of objects, such as appearing different from their surroundings and having a closed boundary. These include an innovative cue to measure the closed boundary characteristic. In experiments on the challenging PASCAL VOC 07 dataset, we show this new cue to outperform a state-of-the-art saliency measure, and the combined objectness measure to perform better than any cue alone. We also compare to interest point operators, a HOG detector, and three recent works aiming at automatic object segmentation. Finally, we present two applications of objectness. In the first, we sample a small numberof windows according to their objectness probability and give an algorithm to employ them as location priors for modern class-specific object detectors. As we show experimentally, this greatly reduces the number of windows evaluated by the expensive class-specific model. In the second application, we use objectness as a complementary score in addition to the class-specific model, which leads to fewer false positives. As shown in several recent papers, objectness can act as a valuable focus of attention mechanism in many other applications operating on image windows, including weakly supervised learning of object categories, unsupervised pixelwise segmentation, and object tracking in video. Computing objectness is very efficient and takes only about 4 sec. per image.", "We propose a category-independent method to produce a bag of regions and rank them, such that top-ranked regions are likely to be good segmentations of different objects. Our key objectives are completeness and diversity: every object should have at least one good proposed region, and a diverse set should be top-ranked. Our approach is to generate a set of segmentations by performing graph cuts based on a seed region and a learned affinity function. Then, the regions are ranked using structured learning based on various cues. Our experiments on BSDS and PASCAL VOC 2008 demonstrate our ability to find most objects within a small bag of proposed regions." ] }
1907.00734
2955302411
Detecting novel objects without class information is not trivial, as it is difficult to generalize from a small training set. This is an interesting problem for underwater robotics, as modeling marine objects is inherently more difficult in sonar images, and training data might not be available a priori. Detection proposal algorithms can be used for this purpose but usually require a large number of output bounding boxes. In this paper we propose the use of a fully convolutional neural network that regresses an objectness value directly from a Forward-Looking sonar image. By ranking objectness, we can produce high recall (96%) with only 100 proposals per image. In comparison, EdgeBoxes requires 5000 proposals to achieve a slightly better recall of 97%, while Selective Search requires 2000 proposals to achieve 95% recall. We also show that our method outperforms a template matching baseline by a considerable margin, and is able to generalize to completely new objects. We expect that this kind of technique can be used in the field to find lost objects under the sea.
EdgeBoxes @cite_13 is a proposals technique that uses a structured edge detector, which extracts high-quality edges. Edges are then grouped to produce object proposals that can be scored by predefined techniques. This method is very fast but needs a large number of proposals to produce high recall. Selective Search @cite_12 takes a different approach, performing super-pixel segmentation and using a set of strategies to merge super-pixels into detection proposals. It is quite slow, but it can achieve very high recall with a medium number of output proposals.
{ "cite_N": [ "@cite_13", "@cite_12" ], "mid": [ "7746136", "2088049833" ], "abstract": [ "The use of object proposals is an effective recent approach for increasing the computational efficiency of object detection. We propose a novel method for generating object bounding box proposals using edges. Edges provide a sparse yet informative representation of an image. Our main observation is that the number of contours that are wholly contained in a bounding box is indicative of the likelihood of the box containing an object. We propose a simple box objectness score that measures the number of edges that exist in the box minus those that are members of contours that overlap the box’s boundary. Using efficient data structures, millions of candidate boxes can be evaluated in a fraction of a second, returning a ranked set of a few thousand top-scoring proposals. Using standard metrics, we show results that are significantly more accurate than the current state-of-the-art while being faster to compute. In particular, given just 1000 proposals we achieve over 96 object recall at overlap threshold of 0.5 and over 75 recall at the more challenging overlap of 0.7. Our approach runs in 0.25 seconds and we additionally demonstrate a near real-time variant with only minor loss in accuracy.", "This paper addresses the problem of generating possible object locations for use in object recognition. We introduce selective search which combines the strength of both an exhaustive search and segmentation. Like segmentation, we use the image structure to guide our sampling process. Like exhaustive search, we aim to capture all possible object locations. Instead of a single technique to generate possible object locations, we diversify our search and use a variety of complementary image partitionings to deal with as many image conditions as possible. Our selective search results in a small set of data-driven, class-independent, high quality locations, yielding 99 recall and a Mean Average Best Overlap of 0.879 at 10,097 locations. The reduced number of locations compared to an exhaustive search enables the use of stronger machine learning techniques and stronger appearance models for object recognition. In this paper we show that our selective search enables the use of the powerful Bag-of-Words model for recognition. The selective search software is made publicly available (Software: http: disi.unitn.it uijlings SelectiveSearch.html )." ] }
1907.00734
2955302411
Detecting novel objects without class information is not trivial, as it is difficult to generalize from a small training set. This is an interesting problem for underwater robotics, as modeling marine objects is inherently more difficult in sonar images, and training data might not be available a priori. Detection proposal algorithms can be used for this purpose but usually require a large number of output bounding boxes. In this paper we propose the use of a fully convolutional neural network that regresses an objectness value directly from a Forward-Looking sonar image. By ranking objectness, we can produce high recall (96%) with only 100 proposals per image. In comparison, EdgeBoxes requires 5000 proposals to achieve a slightly better recall of 97%, while Selective Search requires 2000 proposals to achieve 95% recall. We also show that our method outperforms a template matching baseline by a considerable margin, and is able to generalize to completely new objects. We expect that this kind of technique can be used in the field to find lost objects under the sea.
Neural networks have also been used to model detection proposals. The best technique is the Region Proposal Network (RPN) from the Faster R-CNN object detection framework @cite_3 . The RPN module regresses bounding box coordinates and outputs a binary decision corresponding to object vs. background. The RPN works quite well on color images and improved the state of the art on the PASCAL VOC 2007/2012 datasets, but we have not been able to train such modules for proposals on sonar images, most likely due to the small-scale datasets that we have.
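For context, RPN-style proposals start from a dense grid of anchor boxes over the shared feature map; a sketch of the anchor generation (scales and ratios are illustrative):

```python
import numpy as np

def make_anchors(feat_h, feat_w, stride=16,
                 scales=(64, 128, 256), ratios=(0.5, 1.0, 2.0)):
    """Tile one anchor per (scale, ratio) pair at every feature-map cell.

    The RPN head then predicts, for each anchor, box-coordinate offsets
    and an object-vs-background score.
    """
    anchors = []
    for y in range(feat_h):
        for x in range(feat_w):
            cx, cy = (x + 0.5) * stride, (y + 0.5) * stride
            for s in scales:
                for r in ratios:
                    w, h = s * np.sqrt(r), s / np.sqrt(r)
                    anchors.append([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2])
    return np.asarray(anchors)   # (feat_h * feat_w * len(scales) * len(ratios), 4)
```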
{ "cite_N": [ "@cite_3" ], "mid": [ "2613718673" ], "abstract": [ "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https: github.com ShaoqingRen faster_rcnn." ] }
1907.00734
2955302411
Detecting novel objects without class information is not trivial, as it is difficult to generalize from a small training set. This is an interesting problem for underwater robotics, as modeling marine objects is inherently more difficult in sonar images, and training data might not be available a priori. Detection proposal algorithms can be used for this purpose but usually require a large number of output bounding boxes. In this paper we propose the use of a fully convolutional neural network that regresses an objectness value directly from a Forward-Looking sonar image. By ranking objectness, we can produce high recall (96%) with only 100 proposals per image. In comparison, EdgeBoxes requires 5000 proposals to achieve a slightly better recall of 97%, while Selective Search requires 2000 proposals to achieve 95% recall. We also show that our method outperforms a template matching baseline by a considerable margin, and is able to generalize to completely new objects. We expect that this kind of technique can be used in the field to find lost objects under the sea.
While there are established techniques for detection proposals in color images, these are not directly transferable to sonar images. Bounding box regression techniques cannot be trained unless a large dataset ( @math 1M images) is available for pre-training. Typical sonar image datasets range in the thousands of images, preventing the use of such techniques. We have developed a simple objectness regressor @cite_8 using neural networks that works well for detection proposals, but it is computationally expensive, as features are not shared across neural network evaluations, and simple thresholding of objectness values might not generalize well across environments.
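A fully convolutional objectness map in the spirit of this paper amortizes feature computation across all window locations; a schematic (layer sizes are ours, not the exact architecture):

```python
import torch.nn as nn

class ObjectnessFCN(nn.Module):
    """Dense objectness regression over a sonar image, schematically.

    One forward pass yields an objectness score per spatial location,
    sharing features across windows instead of re-evaluating a network
    per crop.
    """
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1), nn.Sigmoid(),   # objectness in [0, 1]
        )

    def forward(self, sonar_img):                # (B, 1, H, W)
        return self.net(sonar_img)               # (B, 1, H/2, W/2) objectness map
```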
{ "cite_N": [ "@cite_8" ], "mid": [ "2512422503" ], "abstract": [ "Forward-looking sonar can capture high resolution images of underwater scenes, but their interpretation is complex. Generic object detection in such images has not been solved, specially in cases of small and unknown objects. In comparison, detection proposal algorithms have produced top performing object detectors in real-world color images. In this work we develop a Convolutional Neural Network that can reliably score objectness of image windows in forward-looking sonar images and by thresholding objectness, we generate detection proposals. In our dataset of marine garbage objects, we obtain 94 recall, generating around 60 proposals per image. The biggest strength of our method is that it can generalize to previously unseen objects. We show this by detecting chain links, walls and a wrench without previous training in such objects. We strongly believe our method can be used for class-independent object detection, with many real-world applications such as chain following and mine detection." ] }
1907.01030
2956159074
LSTM based language models are an important part of modern LVCSR systems as they significantly improve performance over traditional backoff language models. Incorporating them efficiently into decoding has been notoriously difficult. In this paper we present an approach based on a combination of one-pass decoding and lattice rescoring. We perform decoding with the LSTM-LM in the first pass but recombine hypotheses that share the last two words; afterwards we rescore the resulting lattice. We run our systems on GPGPU equipped machines and are able to produce competitive results on the Hub5'00 and Librispeech evaluation corpora with a runtime better than real-time. In addition we briefly investigate the possibility of carrying out the full sum over all state-sequences belonging to a given word-hypothesis during decoding without recombination.
Early approaches to introducing NN-LMs into decoding include some form of conversion to a more traditional backoff LM: A very straightforward approach to convert complex models is to sample them to create large training corpora on which back-off LMs can be trained; this is the approach of @cite_20 . In @cite_28 , the continuous states of an RNN-LM are discretized to create a weighted finite state transducer. The authors of @cite_27 trained feed-forward LMs of different orders and extracted the probabilities for the backoff LM directly from the neural network. @cite_8 compares different techniques for conversion, and @cite_29 uses these techniques to investigate conversion of domain-adapted LSTM LMs.
{ "cite_N": [ "@cite_8", "@cite_28", "@cite_29", "@cite_27", "@cite_20" ], "mid": [ "2295078202", "1585876329", "2748092010", "2037942319", "2110415041" ], "abstract": [ "In this paper, we investigate and compare three different possibilities to convert recurrent neural network language models (RNNLMs) into backoff language models (BNLM). While RNNLMs often outperform traditional n-gram approaches in the task of language modeling, their computational demands make them unsuitable for an efficient usage during decoding in an LVCSR system. It is, therefore, of interest to convert them into BNLMs in order to integrate their information into the decoding process. This paper compares three different approaches: a text based conversion, a probability based conversion and an iterative conversion. The resulting language models are evaluated in terms of perplexity and mixed error rate in the context of the Code-Switching data corpus SEAME. Although the best results are obtained by combining the results of all three approaches, the text based conversion approach alone leads to significant improvements on the SEAME corpus as well while offering the highest computational efficiency. In total, the perplexity can be reduced by 11.4 relative on the evaluation set and the mixed error rate by 3.0 relative on the same data set.", "Recurrent neural network language models (RNNLMs) have recently shown to outperform the venerable n-gram language models (LMs). However, in automatic speech recognition (ASR), RNNLMs were not yet used to directly decode a speech signal. Instead, RNNLMs are rather applied to rescore N-best lists generated from word lattices. To use RNNLMs in earlier stages of the speech recognition, our work proposes to transform RNNLMs into weighted finite state transducers approximating their underlying probability distribution. While the main idea consists in discretizing continuous representations of word histories, we present a first implementation of the approach using clustering techniques and entropy-based pruning. Achieved experimental results on LM perplexity and on ASR word error rates are encouraging since the performance of the discretized RNNLMs is comparable to the one of n-gram LMs.", "", "Neural network language models (NNLMs) have achieved very good performance in large-vocabulary continuous speech recognition (LVCSR) systems. Because decoding with NNLMs is computationally expensive, there is interest in developing methods to approximate NNLMs with simpler language models that are suitable for fast decoding. In this work, we propose an approximate method for converting a feedforward NNLM into a back-off n-gram language model that can be used directly in existing LVCSR decoders. We convert NNLMs of increasing order to pruned back-off language models, using lower-order models to constrain the n-grams allowed in higher-order models. In experiments on Broadcast News data, we find that the resulting back-off models retain the bulk of the gain achieved by NNLMs over conventional n-gram language models, and give accuracy improvements as compared to existing methods for converting NNLMs to back-off models. In addition, the proposed approach can be applied to any type of non-back-off language model to enable efficient decoding.", "Long-span language models that capture syntax and semantics are seldom used in the first pass of large vocabulary continuous speech recognition systems due to the prohibitive search-space of sentence-hypotheses. 
Instead, an N-best list of hypotheses is created using tractable n-gram models, and rescored using the long-span models. It is shown in this paper that computationally tractable variational approximations of the long-span models are a better choice than standard n-gram models for first pass decoding. They not only result in a better first pass output, but also produce a lattice with a lower oracle word error rate, and rescoring the N-best list from such lattices with the long-span models requires a smaller N to attain the same accuracy. Empirical results on the WSJ, MIT Lectures, NIST 2007 Meeting Recognition and NIST 2001 Conversational Telephone Recognition data sets are presented to support these claims." ] }
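As an illustration of the text-based conversion idea above, the sketch below samples a corpus from a neural LM and estimates a bigram model from it. `sample_sentence` is a hypothetical stand-in for whatever NN-LM sampler is available, and a real system would add smoothing and back-off weights (e.g., Kneser-Ney) before exporting an ARPA file.

```python
# Hypothetical sketch of text-based conversion: sample text from the
# neural LM, then estimate an n-gram model from the sampled corpus.
from collections import Counter

def build_bigram_lm(sample_sentence, num_samples=1_000_000):
    unigrams, bigrams = Counter(), Counter()
    for _ in range(num_samples):
        words = ["<s>"] + sample_sentence() + ["</s>"]
        unigrams.update(words)
        bigrams.update(zip(words[:-1], words[1:]))
    # Maximum-likelihood bigram probabilities P(w | h) = c(h, w) / c(h)
    return {(h, w): c / unigrams[h] for (h, w), c in bigrams.items()}
```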
1907.01030
2956159074
LSTM based language models are an important part of modern LVCSR systems as they significantly improve performance over traditional backoff language models. Incorporating them efficiently into decoding has been notoriously difficult. In this paper we present an approach based on a combination of one-pass decoding and lattice rescoring. We perform decoding with the LSTM-LM in the first pass but recombine hypotheses that share the last two words; afterwards we rescore the resulting lattice. We run our systems on GPGPU equipped machines and are able to produce competitive results on the Hub5'00 and Librispeech evaluation corpora with a runtime better than real-time. In addition we briefly investigate the possibility of carrying out the full sum over all state-sequences belonging to a given word-hypothesis during decoding without recombination.
Closest to the work presented in this paper are publications where the authors integrated LSTM-LMs into first-pass decoding: In @cite_21 , a set of caches was introduced to minimize unnecessary computations when evaluating the LSTM-LM. In @cite_12 , an on-the-fly rescoring approach to integrate LSTM-LMs into first-pass decoding is presented. The authors of @cite_16 use a hybrid CPU/GPGPU architecture for real-time decoding: the HCL transducer is composed with a small n-gram model and expanded on the GPU, while rescoring with an LSTM-LM happens on the CPU. Caching of previous outputs enables real-time decoding. All three papers use hierarchical softmax word classes to reduce the number of computations in the output layer @cite_26 , and, with the exception of @cite_21 , interpolate the LSTM-LM with a Max-Entropy LM @cite_10 . The work of @cite_16 is extended in @cite_15 : the LSTM units are replaced with GRUs, NCE replaces the hierarchical softmax, and GRU states are quantized to reduce the number of necessary computations.
{ "cite_N": [ "@cite_26", "@cite_21", "@cite_15", "@cite_16", "@cite_10", "@cite_12" ], "mid": [ "36903255", "2091981305", "2964191536", "2394835536", "1965154800", "2066378046" ], "abstract": [ "In recent years, variants of a neural network architecture for statistical language modeling have been proposed and successfully applied, e.g. in the language modeling component of speech recognizers. The main advantage of these architectures is that they learn an embedding for words (or other symbols) in a continuous space that helps to smooth the language model and provide good generalization even when the number of training examples is insufficient. However, these models are extremely slow in comparison to the more commonly used n-gram models, both for training and recognition. As an alternative to an importance sampling method proposed to speed-up training, we introduce a hierarchical decomposition of the conditional probabilities that yields a speed-up of about 200 both during training and recognition. The hierarchical decomposition is a binary hierarchical clustering constrained by the prior knowledge extracted from the WordNet semantic hierarchy.", "Recurrent neural network language models (RNNLMs) have recently produced improvements on language processing tasks ranging from machine translation to word tagging and speech recognition. To date, however, the computational expense of RNNLMs has hampered their application to first pass decoding. In this paper, we show that by restricting the RNNLM calls to those words that receive a reasonable score according to a n-gram model, and by deploying a set of caches, we can reduce the cost of using an RNNLM in the first pass to that of using an additional n-gram model. We compare this scheme to lattice rescoring, and find that they produce comparable results for a Bing Voice search task. The best performance results from rescoring a lattice that is itself created with a RNNLM in the first pass.", "This paper presents methods to accelerate recurrent neural network based language models (RNNLMs) for online speech recognition systems. Firstly, a lossy compression of the past hidden layer outputs (history vector) with caching is introduced in order to reduce the number of LM queries. Next, RNNLM computations are deployed in a CPU-GPU hybrid manner, which computes each layer of the model on a more advantageous platform. The added overhead by data exchanges between CPU and GPU is compensated through a frame-wise batching strategy. The performance of the proposed methods evaluated on LibriSpeech 1 test sets indicates that the reduction in history vector precision improves the average recognition speed by 1.23 times with minimum degradation in accuracy. On the other hand, the CPU-GPU hybrid parallelization enables RNNLM based real-time recognition with a four times improvement in speed.", "", "We describe how to effectively train neural network based language models on large data sets. Fast convergence during training and better overall performance is observed when the training data are sorted by their relevance. We introduce hash-based implementation of a maximum entropy model, that can be trained as a part of the neural network model. This leads to significant reduction of computational complexity. 
We achieved around 10% relative reduction of word error rate on English Broadcast News speech recognition task, against large 4-gram model trained on 400M tokens.", "This paper proposes an efficient one-pass decoding method for realtime speech recognition employing a recurrent neural network language model (RNNLM). An RNNLM is an effective language model that yields a large gain in recognition accuracy when it is combined with a standard n-gram model. However, since every word probability distribution based on an RNNLM is dependent on the entire history from the beginning of the speech, the search space in Viterbi decoding grows exponentially with the length of the recognition hypotheses and makes computation prohibitively expensive. Therefore, an RNNLM is usually used by N-best rescoring or by approximating it to a back-off n-gram model. In this paper, we present another approach that enables one-pass Viterbi decoding with an RNNLM without approximation, where the RNNLM is represented as a prefix tree of possible word sequences, and only the part needed for decoding is generated on-the-fly and used to rescore each hypothesis using an on-the-fly composition technique we previously proposed. Experimental results on the MIT lecture transcription task show that our proposed method enables one-pass decoding with small overhead for the RNNLM and achieves a slightly higher accuracy than 1000-best rescoring. Furthermore, it reduces the latency from the end of each utterance in two-pass decoding by a factor of 10." ] }
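A minimal sketch of the caching idea used in first-pass decoding: memoize the LM state and output distribution per word history, so that hypotheses sharing a history trigger only one network evaluation. `step` is a hypothetical function mapping (word, state) to (log_probs, new_state); histories are tuples beginning with the sentence-start token.

```python
# Hypothetical sketch of per-history caching for first-pass LSTM-LM decoding.
cache = {}

def lm_forward(history, step, init_state):
    """history: tuple of consumed words, starting with ('<s>',)."""
    if history not in cache:
        if len(history) == 1:
            prev_state = init_state
        else:
            # Reuse the cached state of the one-word-shorter history.
            _, prev_state = lm_forward(history[:-1], step, init_state)
        log_probs, state = step(history[-1], prev_state)
        cache[history] = (log_probs, state)
    return cache[history]

def lm_score(history, word, step, init_state):
    """log P(word | history), reusing cached network evaluations."""
    log_probs, _ = lm_forward(history, step, init_state)
    return log_probs[word]
```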
1907.00787
2970116173
This paper presents a novel CNN-based approach for synthesizing high-resolution LiDAR point cloud data. Our approach generates semantically and perceptually realistic results with guidance from specialized loss-functions. First, we utilize a modified per-point loss that addresses missing LiDAR point measurements. Second, we align the quality of our generated output with real-world sensor data by applying a perceptual loss. In large-scale experiments on real-world datasets, we evaluate both the geometric accuracy and semantic segmentation performance using our generated data vs. ground truth. In a mean opinion score testing we further assess the perceptual quality of our generated point clouds. Our results demonstrate a significant quantitative and qualitative improvement in both geometry and semantics over traditional non CNN-based upsampling methods.
The aim of up-sampling is to estimate the high-resolution visual output of a corresponding low-resolution input. In this work we consider cylindrical two-dimensional projections of structured LiDAR point clouds; it is therefore vital to also take into account analogous approaches on RGB images to solve this task. A sizable amount of literature exists on RGB image up-sampling; we focus on what we consider most relevant to this paper. The authors of @cite_14 present a comprehensive evaluation of prevailing RGB up-sampling techniques prior to the adoption of convolutional neural networks. More advanced techniques, such as SRCNN @cite_5 , outperform these traditional methods. However, they cannot cope with data that features missing measurements, since dense input representations are required. The traditional methods, on the other hand, can easily be applied to cylindrical LiDAR projections. Due to their low computational complexity they can be used for real-time applications. However, the traditional re-sampling techniques are not able to restore the high-frequency information, i.e. fine details in the resized input, due to the low-pass behavior of the interpolation filters @cite_11 .
{ "cite_N": [ "@cite_5", "@cite_14", "@cite_11" ], "mid": [ "1885185971", "7682646", "2520322935" ], "abstract": [ "We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage. We explore different network structures and parameter settings to achieve trade-offs between performance and speed. Moreover, we extend our network to cope with three color channels simultaneously, and show better overall reconstruction quality.", "Single-image super-resolution is of great importance for vision applications, and numerous algorithms have been proposed in recent years. Despite the demonstrated success, these results are often generated based on different assumptions using different datasets and metrics. In this paper, we present a systematic benchmark evaluation for state-of-the-art single-image super-resolution algorithms. In addition to quantitative evaluations based on conventional full-reference metrics, human subject studies are carried out to evaluate image quality based on visual perception. The benchmark evaluations demonstrate the performance and limitations of state-of-the-art algorithms which sheds light on future research in single-image super-resolution.", "Depth boundaries often lose sharpness when upsampling from low-resolution (LR) depth maps especially at large upscaling factors. We present a new method to address the problem of depth map super resolution in which a high-resolution (HR) depth map is inferred from a LR depth map and an additional HR intensity image of the same scene. We propose a Multi-Scale Guided convolutional network (MSG-Net) for depth map super resolution. MSG-Net complements LR depth features with HR intensity features using a multi-scale fusion strategy. Such a multi-scale guidance allows the network to better adapt for upsampling of both fine- and large-scale structures. Specifically, the rich hierarchical HR intensity features at different levels progressively resolve ambiguity in depth map upsampling. Moreover, we employ a high-frequency domain training method to not only reduce training time but also facilitate the fusion of depth and intensity features. With the multi-scale guidance, MSG-Net achieves state-of-art performance for depth map upsampling." ] }
1907.00787
2970116173
This paper presents a novel CNN-based approach for synthesizing high-resolution LiDAR point cloud data. Our approach generates semantically and perceptually realistic results with guidance from specialized loss-functions. First, we utilize a modified per-point loss that addresses missing LiDAR point measurements. Second, we align the quality of our generated output with real-world sensor data by applying a perceptual loss. In large-scale experiments on real-world datasets, we evaluate both the geometric accuracy and semantic segmentation performance using our generated data vs. ground truth. In a mean opinion score testing we further assess the perceptual quality of our generated point clouds. Our results demonstrate a significant quantitative and qualitative improvement in both geometry and semantics over traditional non CNN-based upsampling methods.
The authors of @cite_13 recently proposed PU-Net, which directly operates on three-dimensional point clouds. The up-sampling network learns multilevel features per point and expands the point set via multi-branch convolution units. The expanded feature is then split into a multitude of features, which are then reconstructed to an up-sampled point set. The resulting point set is unordered and forms a generic point cloud. For our application, however, it is important to maintain the ordered point cloud structure provided by LiDAR sensors. First, we are able to apply downstream perceptual algorithms which have been designed for the structured low-resolution data. Second, it is possible to re-use valuable data recordings by up-sampling them to higher resolutions, especially when new LiDAR sensors with more layers are introduced to the market, or when algorithms such as semantic segmentation @cite_21 or stixels @cite_3 have to be adapted to the higher resolution.
{ "cite_N": [ "@cite_21", "@cite_13", "@cite_3" ], "mid": [ "2799211199", "2963680153", "2953373759" ], "abstract": [ "Mobile robots and autonomous vehicles rely on multi-modal sensor setups to perceive and understand their surroundings. Aside from cameras, LiDAR sensors represent a central component of state-of-the-art perception systems. In addition to accurate spatial perception, a comprehensive semantic understanding of the environment is essential for efficient and safe operation. In this paper we present a novel deep neural network architecture called LiLaNet for point-wise, multi-class semantic labeling of semi-dense LiDAR data. The network utilizes virtual image projections of the 3D point clouds for efficient inference. Further, we propose an automated process for large-scale cross-modal training data generation called Autolabeling, in order to boost semantic labeling performance while keeping the manual annotation effort low. The effectiveness of the proposed network architecture as well as the automated data generation process is demonstrated on a manually annotated ground truth dataset. LiLaNet is shown to significantly outperform current state-of-the-art CNN architectures for LiDAR data. Applying our automatically generated large-scale training data yields a boost of up to 14 percentage points compared to networks trained on manually annotated data only.", "Learning and analyzing 3D point clouds with deep networks is challenging due to the sparseness and irregularity of the data. In this paper, we present a data-driven point cloud upsampling technique. The key idea is to learn multi-level features per point and expand the point set via a multi-branch convolution unit implicitly in feature space. The expanded feature is then split to a multitude of features, which are then reconstructed to an upsampled point set. Our network is applied at a patch-level, with a joint loss function that encourages the upsampled points to remain on the underlying surface with a uniform distribution. We conduct various experiments using synthesis and scan data to evaluate our method and demonstrate its superiority over some baseline methods and an optimization-based method. Results show that our upsampled points have better uniformity and are located closer to the underlying surfaces.", "This paper presents a compact and accurate representation of 3D scenes that are observed by a LiDAR sensor and a monocular camera. The proposed method is based on the well-established Stixel model originally developed for stereo vision applications. We extend this Stixel concept to incorporate data from multiple sensor modalities. The resulting mid-level fusion scheme takes full advantage of the geometric accuracy of LiDAR measurements as well as the high resolution and semantic detail of RGB images. The obtained environment model provides a geometrically and semantically consistent representation of the 3D scene at a significantly reduced amount of data while minimizing information loss at the same time. Since the different sensor modalities are considered as input to a joint optimization problem, the solution is obtained with only minor computational overhead. We demonstrate the effectiveness of the proposed multimodal Stixel algorithm on a manually annotated ground truth dataset. 
Our results indicate that the proposed mid-level fusion of LiDAR and camera data improves both the geometric and semantic accuracy of the Stixel model significantly while reducing the computational overhead as well as the amount of generated data in comparison to using a single modality on its own." ] }
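The ordered structure referred to above comes from projecting the sweep into a (layer x azimuth) grid. A sketch of such a cylindrical projection follows; the sensor geometry (64 layers, a 2.0°/-24.8° vertical field of view) is illustrative, not taken from the cited work.

```python
# Sketch: project an unordered LiDAR sweep into an ordered range image.
import numpy as np

def project_to_range_image(points, n_layers=64, n_cols=2048,
                           fov_up=np.radians(2.0), fov_down=np.radians(-24.8)):
    """points: (N, 3) x/y/z in the sensor frame -> (n_layers, n_cols) ranges."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    azimuth = np.arctan2(y, x)                          # in [-pi, pi]
    elevation = np.arcsin(z / np.maximum(r, 1e-9))
    col = ((azimuth + np.pi) / (2 * np.pi) * n_cols).astype(int) % n_cols
    row = ((fov_up - elevation) / (fov_up - fov_down) * n_layers).astype(int)
    img = np.zeros((n_layers, n_cols), dtype=np.float32)
    keep = (row >= 0) & (row < n_layers)
    img[row[keep], col[keep]] = r[keep]                 # last point wins per cell
    return img
```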
1812.01129
2903077796
An agent with an inaccurate model of its environment faces a difficult choice: it can ignore the errors in its model and act in the real world in whatever way it determines is optimal with respect to its model. Alternatively, it can take a more conservative stance and eschew its model in favor of optimizing its behavior solely via real-world interaction. This latter approach can be exceedingly slow to learn from experience, while the former can lead to "planner overfitting" - aspects of the agent's behavior are optimized to exploit errors in its model. This paper explores an intermediate position in which the planner seeks to avoid overfitting through a kind of regularization of the plans it considers. We present three different approaches that demonstrably mitigate planner overfitting in reinforcement-learning environments.
Kolter and Ng (2009) applied regularization techniques to LSTD @cite_9 with an algorithm they called LARS-TD. In particular, they argued that, without regularization, LSTD's performance depends heavily on the number of basis functions chosen and the size of the data set collected. If the data set is too small, the technique is prone to overfitting. They showed that @math and @math regularization yield a procedure that inherits the benefits of selecting good features while making it possible to compute the fixed point. Later work by Liu et al. (2012) built on this with RO-TD, an @math -regularized off-policy temporal difference learning method. Johns et al. (2010) cast the @math -regularized fixed-point computation as a linear complementarity problem, which provides stronger solution-uniqueness guarantees than those provided for LARS-TD. Petrik et al. (2010) examined the approximate linear programming (ALP) framework for finding approximate value functions in large MDPs. They showed the benefits of adding an @math regularization constraint to the ALP, which improves the error bound at training time and helps fight overfitting.
{ "cite_N": [ "@cite_9" ], "mid": [ "2072931156" ], "abstract": [ "We introduce two new temporal difference (TD) algorithms based on the theory of linear least-squares function approximation. We define an algorithm we call Least-Squares TD (LS TD) for which we prove probability-one convergence when it is used with a function approximator linear in the adjustable parameters. We then define a recursive version of this algorithm, Recursive Least-Square TD (RLS TD). Although these new TD algorithms require more computation per time-step than do Sutton‘s TD(λ) algorithms, they are more efficient in a statistical sense because they extract more information from training experiences. We describe a simulation experiment showing the substantial improvement in learning rate achieved by RLS TD in an example Markov prediction problem. To quantify this improvement, we introduce the TD error variance of a Markov chain, σTD, and experimentally conclude that the convergence rate of a TD algorithm depends linearly on σTD. In addition to converging more rapidly, LS TD and RLS TD do not have control parameters, such as a learning rate parameter, thus eliminating the possibility of achieving poor performance by an unlucky choice of parameters." ] }
1812.01129
2903077796
An agent with an inaccurate model of its environment faces a difficult choice: it can ignore the errors in its model and act in the real world in whatever way it determines is optimal with respect to its model. Alternatively, it can take a more conservative stance and eschew its model in favor of optimizing its behavior solely via real-world interaction. This latter approach can be exceedingly slow to learn from experience, while the former can lead to "planner overfitting" - aspects of the agent's behavior are optimized to exploit errors in its model. This paper explores an intermediate position in which the planner seeks to avoid overfitting through a kind of regularization of the plans it considers. We present three different approaches that demonstrably mitigate planner overfitting in reinforcement-learning environments.
Farahmand et al. (2008, 2009) focused on regularization applied to Policy Iteration and Fitted @math -Iteration (FQI) @cite_19 and developed two related methods for Regularized Policy Iteration, each leveraging @math regularization during the policy evaluation step of each iteration. The first method adds a regularization term to the Least Squares Temporal Difference (LSTD) error @cite_9 , while the second adds a similar term to the optimization of Bellman residual minimization @cite_7 @cite_14 @cite_17 with regularization @cite_16 . Their main result shows finite-sample convergence for the @math function under the approximated policy and the true optimal policy. A related method for FQI adds a regularization cost to the least-squares regression of the @math function. Follow-up work @cite_10 expanded Regularized Fitted @math -Iteration to planning. That is, given a data set @math and a function family @math (such as regression trees), FQI approximates a @math function through repeated iterations of a regression problem in which @math imposes a regularization penalty term and @math is a regularization coefficient. They prove bounds relating this regularization cost to the approximation error in @math between iterations of FQI.
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_9", "@cite_19", "@cite_16", "@cite_10", "@cite_17" ], "mid": [ "2040766536", "1646707810", "2072931156", "", "2158883409", "2124595631", "157380174" ], "abstract": [ "Abstract Fitting the value function in a Markovian decision process by a linear superposition of M basis functions reduces the problem dimensionality from the number of states down to M , with good accuracy retained if the value function is a smooth function of its argument, the state vector. This paper provides, for both the discounted and undiscounted cases, three algorithms for computing the coefficients in the linear superposition: linear programming, policy iteration, and least squares.", "ABSTRACT A number of reinforcement learning algorithms have been developed that are guaranteed to converge to the optimal solution when used with lookup tables. It is shown, however, that these algorithms can easily become unstable when implemented directly with a general function-approximation system, such as a sigmoidal multilayer perceptron, a radial-basis-function system, a memory-based learning system, or even a linear function-approximation system. A new class of algorithms, residual gradient algorithms, is proposed, which perform gradient descent on the mean squared Bellman residual, guaranteeing convergence. It is shown, however, that they may learn very slowly in some cases. A larger class of algorithms, residual algorithms, is proposed that has the guaranteed convergence of the residual gradient algorithms, yet can retain the fast learning speed of direct algorithms. In fact, both direct and residual gradient algorithms are shown to be special cases of residual algorithms, and it is shown that residual algorithms can combine the advantages of each approach. The direct, residual gradient, and residual forms of value iteration, Q-learning, and advantage learning are all presented. Theoretical analysis is given explaining the properties these algorithms have, and simulation results are given that demonstrate these properties.", "We introduce two new temporal difference (TD) algorithms based on the theory of linear least-squares function approximation. We define an algorithm we call Least-Squares TD (LS TD) for which we prove probability-one convergence when it is used with a function approximator linear in the adjustable parameters. We then define a recursive version of this algorithm, Recursive Least-Square TD (RLS TD). Although these new TD algorithms require more computation per time-step than do Sutton‘s TD(λ) algorithms, they are more efficient in a statistical sense because they extract more information from training experiences. We describe a simulation experiment showing the substantial improvement in learning rate achieved by RLS TD in an example Markov prediction problem. To quantify this improvement, we introduce the TD error variance of a Markov chain, σTD, and experimentally conclude that the convergence rate of a TD algorithm depends linearly on σTD. In addition to converging more rapidly, LS TD and RLS TD do not have control parameters, such as a learning rate parameter, thus eliminating the possibility of achieving poor performance by an unlucky choice of parameters.", "", "We consider the problem of on-line value function estimation in reinforcement learning. We concentrate on the function approximator to use. To try to break the curse of dimensionality, we focus on non parametric function approximators. 
We propose to fit the use of kernels into the temporal difference algorithms by using regression via the LASSO. We introduce the equi-gradient descent algorithm (EGD) which is a direct adaptation of the one recently introduced in the LARS algorithm family for solving the LASSO. We advocate our choice of the EGD as a judicious algorithm for these tasks. We present the EGD algorithm in details as well as some experimental results. We insist on the qualities of the EGD for reinforcement learning.", "POMDPs provide a principled framework for planning under uncertainty, but are computationally intractable, due to the \"curse of dimensionality\" and the \"curse of history\". This paper presents an online POMDP algorithm that alleviates these difficulties by focusing the search on a set of randomly sampled scenarios. A Determinized Sparse Partially Observable Tree (DESPOT) compactly captures the execution of all policies on these scenarios. Our Regularized DESPOT (R-DESPOT) algorithm searches the DESPOT for a policy, while optimally balancing the size of the policy and its estimated value obtained under the sampled scenarios. We give an output-sensitive performance bound for all policies derived from a DESPOT, and show that R-DESPOT works well if a small optimal policy exists. We also give an anytime algorithm that approximates R-DESPOT. Experiments show strong results, compared with two of the fastest online POMDP algorithms. Source code along with experimental settings are available at http: bigbird.comp.nus.edu.sg pmwiki farm appl .", "" ] }
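A minimal sketch of regularized FQI with a linear function class, where an L2 penalty stands in for the generic penalty term described above; the shapes and hyperparameters are assumptions.

```python
# Sketch of Fitted Q-Iteration with ridge-regularized regression;
# linear Q(s, a) = phi(s, a) . w.
import numpy as np

def regularized_fqi(phi_sa, r, phi_next_all, gamma=0.99, lam=1e-2, iters=50):
    """
    phi_sa: (N, d) features of observed (s, a) pairs;
    r: (N,) rewards;
    phi_next_all: (N, |A|, d) features of (s', a') for every action a'.
    """
    n, d = phi_sa.shape
    w = np.zeros(d)
    reg = lam * np.eye(d)
    for _ in range(iters):
        q_next = phi_next_all @ w                  # (N, |A|) Q at next states
        y = r + gamma * q_next.max(axis=1)         # Bellman targets
        # Ridge regression toward the targets
        w = np.linalg.solve(phi_sa.T @ phi_sa + reg, phi_sa.T @ y)
    return w
```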
1812.01285
2902058571
This paper presents a novel method for rare event detection from an image pair with class-imbalanced datasets. A straightforward approach for event detection tasks is to train a detection network from a large-scale dataset in an end-to-end manner. However, in many applications such as building change detection on satellite images, few positive samples are available for the training. Moreover, scene image pairs contain many trivial events, such as in illumination changes or background motions. These many trivial events and the class imbalance problem lead to false alarms for rare event detection. In order to overcome these difficulties, we propose a novel method to learn disentangled representations from only low-cost negative samples. The proposed method disentangles different aspects in a pair of observations: variant and invariant factors that represent trivial events and image contents, respectively. The effectiveness of the proposed approach is verified by the quantitative evaluations on four change detection datasets, and the qualitative analysis shows that the proposed method can acquire the representations that disentangle rare events from trivial ones.
In change detection tasks, several works have attempted to overcome the difficulties of data collection and cumbersome trivial events described in the previous section. In order to save annotation cost, @cite_18 proposed a weakly supervised method that requires only image-level labels to train its change segmentation model. Although this saves the pixel-level annotation cost, it still requires image-level labels, which are difficult to collect for rare change events. To address trivial events, several works on video surveillance tasks @cite_36 @cite_24 utilize background modeling techniques in which foreground changes are detected as outliers. However, these works assume a continuous stream of frames as input, and their application is limited to change detection in video frames. @cite_20 proposed a semi-supervised method to detect damaged areas from pairs of satellite images. In their method, a bag-of-visual-words vector is extracted from hierarchical shape descriptors, and a support vector machine classifier is trained on the extracted features. Since their method is based on carefully chosen feature descriptors specialized for their task, it lacks generalizability to other domains.
{ "cite_N": [ "@cite_36", "@cite_18", "@cite_20", "@cite_24" ], "mid": [ "", "2617840581", "1894683164", "2102625004" ], "abstract": [ "", "Conventional change detection methods require a large number of images to learn background models or depend on tedious pixel-level labeling by humans. In this paper, we present a weakly supervised approach that needs only image-level labels to simultaneously detect and localize changes in a pair of images. To this end, we employ a deep neural network with DAG topology to learn patterns of change from image-level labeled training data. On top of the initial CNN activations, we define a CRF model to incorporate the local differences and context with the dense connections between individual pixels. We apply a constrained mean-field algorithm to estimate the pixel-level labels, and use the estimated labels to update the parameters of the CNN in an iterative EM framework. This enables imposing global constraints on the observed foreground probability mass function. Our evaluations on four benchmark datasets demonstrate superior detection and localization performance.", "Satellite imagery is a valuable source of information for assessing damages in distressed areas undergoing a calamity, such as an earthquake or an armed conflict. However, the sheer amount of data required to be inspected for this assessment makes it impractical to do it manually. To address this problem, we present a semi-supervised learning framework for large-scale damage detection in satellite imagery. We present a comparative evaluation of our framework using over 88 million images collected from 4, 665 KM2 from 12 different locations around the world. To enable accurate and efficient damage detection, we introduce a novel use of hierarchical shape features in the bags-of-visual words setting. We analyze how practical factors such as sun, sensor-resolution, satellite-angle, and registration differences impact the effectiveness our proposed representation, and compare it to five alternative features in multiple learning settings. Finally, we demonstrate through a user-study that our semi-supervised framework results in a ten-fold reduction in human annotation time at a minimal loss in detection accuracy compared to manual inspection", "A common method for real-time segmentation of moving regions in image sequences involves \"background subtraction\", or thresholding the error between an estimate of the image without moving objects and the current image. The numerous approaches to this problem differ in the type of background model used and the procedure used to update the model. This paper discusses modeling each pixel as a mixture of Gaussians and using an on-line approximation to update the model. The Gaussian, distributions of the adaptive mixture model are then evaluated to determine which are most likely to result from a background process. Each pixel is classified based on whether the Gaussian distribution which represents it most effectively is considered part of the background model. This results in a stable, real-time outdoor tracker which reliably deals with lighting changes, repetitive motions from clutter, and long-term scene changes. This system has been run almost continuously for 16 months, 24 hours a day, through rain and snow." ] }
1812.01285
2902058571
This paper presents a novel method for rare event detection from an image pair with class-imbalanced datasets. A straightforward approach for event detection tasks is to train a detection network from a large-scale dataset in an end-to-end manner. However, in many applications such as building change detection on satellite images, few positive samples are available for the training. Moreover, scene image pairs contain many trivial events, such as in illumination changes or background motions. These many trivial events and the class imbalance problem lead to false alarms for rare event detection. In order to overcome these difficulties, we propose a novel method to learn disentangled representations from only low-cost negative samples. The proposed method disentangles different aspects in a pair of observations: variant and invariant factors that represent trivial events and image contents, respectively. The effectiveness of the proposed approach is verified by the quantitative evaluations on four change detection datasets, and the qualitative analysis shows that the proposed method can acquire the representations that disentangle rare events from trivial ones.
Our work is technically inspired by @cite_30 . The method of @cite_30 learns common and specific features between two different image domains. The key difference between their work and ours is that, in event detection tasks, the images in a pair come from the "same" domain. Since there are no domain biases in the images, we cannot resort to adversarial discriminators when we learn common features. Instead, a distance function in the feature space is used in our work. Since our method is based on a probabilistic latent variable model, rich information from the posterior distribution can be used for measuring a distance between features. This is an advantage of using a VAE instead of the classical auto-encoders used by @cite_30 .
{ "cite_N": [ "@cite_30" ], "mid": [ "2953127297" ], "abstract": [ "The cost of large scale data collection and annotation often makes the application of machine learning algorithms to new tasks or datasets prohibitively expensive. One approach circumventing this cost is training models on synthetic data where annotations are provided automatically. Despite their appeal, such models often fail to generalize from synthetic to real images, necessitating domain adaptation algorithms to manipulate these models before they can be successfully applied. Existing approaches focus either on mapping representations from one domain to the other, or on learning to extract features that are invariant to the domain from which they were extracted. However, by focusing only on creating a mapping or shared representation between the two domains, they ignore the individual characteristics of each domain. We suggest that explicitly modeling what is unique to each domain can improve a model's ability to extract domain-invariant features. Inspired by work on private-shared component analysis, we explicitly learn to extract image representations that are partitioned into two subspaces: one component which is private to each domain and one which is shared across domains. Our model is trained not only to perform the task we care about in the source domain, but also to use the partitioned representation to reconstruct the images from both domains. Our novel architecture results in a model that outperforms the state-of-the-art on a range of unsupervised domain adaptation scenarios and additionally produces visualizations of the private and shared representations enabling interpretation of the domain adaptation process." ] }
1812.01200
2902419372
The number of triangles ( @math ) is an important metric to analyze massive graphs. It is also used to compute clustering coefficient in networks. This paper proposes a new algorithm called PES (Priority Edge Sampling) to estimate triangles in the streaming model where we need to minimize the memory window. PES combines edge sampling and reservoir sampling. Compared with the state-of-the-art streaming algorithms, PES outperforms consistently. The results are verified extensively in 48 large real-world networks in different domains and structures. The performance ratio can be as large as 11. More importantly, the ratio grows with data size almost exponentially. This is especially important in the era of big data--while we can tolerate existing algorithms for smaller datasets, our method is indispensable in very large data sampling. In addition to empirical comparisons, we also proved that the estimator is unbiased, and derived the variance.
Next we need to understand the variance of @math . Although MASCOT gave a similar algorithm, its variance was not derived. We derive the variance of @math and present it in the form of the Relative Standard Error (RSE = @math ) in Theorem . We use RSE instead of the commonly used variance because variance depends on the ground truth, which changes from dataset to dataset. This is especially inconvenient when evaluating multiple datasets--a larger variance on one dataset may be better than a smaller variance on another. The variance of NES is adapted from, but differs from, that of the direct sampling algorithm in @cite_3 in order to accommodate the streaming model. The main difference is that in NES, to identify a closed wedge over a stream, its first two edges need to be added into @math , and then its third edge needs to be visited in the rest of the stream.
{ "cite_N": [ "@cite_3" ], "mid": [ "2536223246" ], "abstract": [ "The number of triangles in a graph is an important metric for understanding the graph. It is also directly related to the clustering coefficient of a graph, which is one of the most important indicator for social networks. Counting the number of triangles is computationally expensive for very large graphs. Hence, estimation is necessary for large graphs, particularly for graphs that are hidden behind searchable interfaces where the graphs in their entirety are not available. For instance, user networks in Twitter and Facebook are not available for third parties to explore their properties directly. This paper proposes a new method to estimate the number of triangles based on random edge sampling. It improves the traditional random edge sampling by probing the edges that have a higher probability of forming triangles. The method outperforms the traditional method consistently, and can be better by orders of magnitude when the graph is very large. The result is demonstrated on 20 graphs, including the largest graphs we can find. More importantly, we proved the improvement ratio, and verified our result on all the datasets. The analytical results are achieved by simplifying the variances of the estimators based on the assumption that the graph is very large. We believe that such big data assumption can lead to interesting results not only in triangle estimation, but also in other sampling problems." ] }
1812.01261
2902452957
The field of automatic video generation has received a boost thanks to the recent Generative Adversarial Networks (GANs). However, most existing methods cannot control the contents of the generated video using a text caption, losing their usefulness to a large extent. This particularly affects human videos due to their great variety of actions and appearances. This paper presents Conditional Flow and Texture GAN (CFT-GAN), a GAN-based video generation method from action-appearance captions. We propose a novel way of generating video by encoding a caption (e.g., "a man in blue jeans is playing golf") in a two-stage generation pipeline. Our CFT-GAN uses such caption to generate an optical flow (action) and a texture (appearance) for each frame. As a result, the output video reflects the content specified in the caption in a plausible way. Moreover, to train our method, we constructed a new dataset for human video generation with captions. We evaluated the proposed method qualitatively and quantitatively via an ablation study and a user study. The results demonstrate that CFT-GAN is able to successfully generate videos containing the action and appearances indicated in the captions.
The task of automatic video generation has also been approached using GANs. However, video generation is more challenging, since it requires both consistency between frames and plausible motion. This is particularly challenging in the case of human motion generation. Video GAN (VGAN) @cite_24 achieves scene-consistent videos by generating the foreground and background separately. This method consists of 3D convolutions that learn motion information and appearance information simultaneously. However, capturing both motion and appearance using single-stream 3D convolutional networks causes generated videos to have problems with either their visual appearance or motion. Recent methods @cite_20 @cite_9 exploit the fact that videos consist of motion and appearance. In @cite_20 , a hierarchical video generation system is proposed: Flow and Texture GAN (FTGAN). FTGAN consists of two components: FlowGAN generates the motion of the video, which is then used by TextureGAN to generate the video frames. This method is able to successfully generate realistic videos that contain plausible motion and consistent scenes.
{ "cite_N": [ "@cite_24", "@cite_9", "@cite_20" ], "mid": [ "2520707650", "", "2883938080" ], "abstract": [ "We capitalize on large amounts of unlabeled video in order to learn a model of scene dynamics for both video recognition tasks (e.g. action classification) and video generation tasks (e.g. future prediction). We propose a generative adversarial network for video with a spatio-temporal convolutional architecture that untangles the scene's foreground from the background. Experiments suggest this model can generate tiny videos up to a second at full frame rate better than simple baselines, and we show its utility at predicting plausible futures of static images. Moreover, experiments and visualizations show the model internally learns useful features for recognizing actions with minimal supervision, suggesting scene dynamics are a promising signal for representation learning. We believe generative video models can impact many applications in video understanding and simulation.", "", "Due to the emergence of Generative Adversarial Networks, video synthesis has witnessed exceptional breakthroughs. However, existing methods lack a proper representation to explicitly control the dynamics in videos. Human pose, on the other hand, can represent motion patterns intrinsically and interpretably, and impose the geometric constraints regardless of appearance. In this paper, we propose a pose guided method to synthesize human videos in a disentangled way: plausible motion prediction and coherent appearance generation. In the first stage, a Pose Sequence Generative Adversarial Network (PSGAN) learns in an adversarial manner to yield pose sequences conditioned on the class label. In the second stage, a Semantic Consistent Generative Adversarial Network (SCGAN) generates video frames from the poses while preserving coherent appearances in the input image. By enforcing semantic consistency between the generated and ground-truth poses at a high feature level, our SCGAN is robust to noisy or abnormal poses. Extensive experiments on both human action and human face datasets manifest the superiority of the proposed method over other state-of-the-arts." ] }
1812.01112
2902218031
Synthetic DNA can in principle be used for the archival storage of arbitrary data. Because errors are introduced during DNA synthesis, storage, and sequencing, an error-correcting code (ECC) is necessary for error-free recovery of the data. Previous work has utilized ECCs that can correct substitution errors, but not insertion or deletion errors (indels), instead relying on sequencing depth and multiple alignment to detect and correct indels -- in effect an inefficient multiple-repetition code. This paper describes an ECC, termed "HEDGES", that corrects simultaneously for substitutions, insertions, and deletions in a single read. Varying code rates allow for correction of up to 10% nucleotide errors and achieve 50% or better of the estimated Shannon limit.
Church et al. @cite_16 synthesized oligomers of length 159, each of which contained both address information (the ordering of oligomers in the message) and payload. There was no explicit ECC. The pool of oligomers was sequenced to a depth of 3000x, allowing recovery of a consensus sequence with high probability. High-depth coverage can correct sequencing errors, but not synthesis errors. Indeed, the final results contained 10 bit errors.
{ "cite_N": [ "@cite_16" ], "mid": [ "2025383314" ], "abstract": [ "Digital information is accumulating at an astounding rate, straining our ability to store and archive it. DNA is among the most dense and stable information media known. The development of new technologies in both DNA synthesis and sequencing make DNA an increasingly feasible digital storage medium. We developed a strategy to encode arbitrary digital information in DNA, wrote a 5.27-megabit book using DNA microchips, and read the book by using next-generation DNA sequencing." ] }
1812.01112
2902218031
Synthetic DNA can in principle be used for the archival storage of arbitrary data. Because errors are introduced during DNA synthesis, storage, and sequencing, an error-correcting code (ECC) is necessary for error-free recovery of the data. Previous work has utilized ECCs that can correct substitution errors, but not insertion or deletion errors (indels), instead relying on sequencing depth and multiple alignment to detect and correct indels -- in effect an inefficient multiple-repetition code. This paper describes an ECC, termed "HEDGES", that corrects simultaneously for substitutions, insertions, and deletions in a single read. Varying code rates allow for correction of up to 10% nucleotide errors and achieve 50% or better of the estimated Shannon limit.
In the largest-scale experiment to date, Organick et al. @cite_5 encoded and recovered, error-free, more than 200 MB of data. Reed-Solomon (R-S) coding was used across strands, with no explicit error correction within a strand. Substitutions and indels within a strand were corrected by multiple alignment and consensus calling. Coverage was 5x for high-quality Illumina sequencing, rising to the 36x to 80x required for Nanopore technology.
{ "cite_N": [ "@cite_5" ], "mid": [ "2788257106" ], "abstract": [ "200 MB of digital data is stored in DNA, randomly accessed and recovered using an error-free approach." ] }
1812.01043
2902625477
An accurate and timely detection of diseases and pests in rice plants can help farmers in applying timely treatment on the plants and thereby can reduce the economic losses substantially. Recent developments in deep learning based convolutional neural networks (CNN) have greatly improved image classification accuracy. In this paper, we present deep learning based approaches to detect diseases and pests in rice plants using images captured in real life scenario. We have experimented with various state-of-the-art CNN architectures on our large dataset of rice diseases and pests collected manually from the field, which contain both inter-class and intra-class variations and have nine classes in total. The results show that we can effectively detect and recognize rice diseases and pests using CNN with the best accuracy of 99.53% on test set using CNN architecture, VGG16. Though the accuracy of CNN models built on VGG16 or other similar architectures is impressive, these models are not suitable for mobile devices due to their large size having a huge number of parameters. To solve this problem, we propose a new CNN architecture, namely stacked CNN, that exploits two stage training to reduce the size of the model significantly while at the same time maintaining high classification accuracy. Our experimental results show that we achieve 95% test accuracy with stacked CNN, while reducing the model size by 98% compared to VGG16. This kind of memory efficient CNN architectures can contribute in rice disease detection and identification based mobile application development.
Many approaches have been applied to identify plant diseases from images. Most of them use general image processing techniques, SVM classifiers, K-means clustering, genetic algorithms @cite_2 , etc. Hand-engineered features often reflect our limited knowledge about an image, and so we cannot gain deeper insight from them. Recently, some researchers have been using neural network based approaches in this area. Deep neural networks are much better at disease recognition from images than traditional image processing techniques.
{ "cite_N": [ "@cite_2" ], "mid": [ "2548258044" ], "abstract": [ "Abstract Agricultural productivity is something on which economy highly depends. This is the one of the reasons that disease detection in plants plays an important role in agriculture field, as having disease in plants are quite natural. If proper care is not taken in this area then it causes serious effects on plants and due to which respective product quality, quantity or productivity is affected. For instance a disease named little leaf disease is a hazardous disease found in pine trees in United States. Detection of plant disease through some automatic technique is beneficial as it reduces a large work of monitoring in big farms of crops, and at very early stage itself it detects the symptoms of diseases i.e. when they appear on plant leaves. This paper presents an algorithm for image segmentation technique which is used for automatic detection and classification of plant leaf diseases. It also covers survey on different diseases classification techniques that can be used for plant leaf disease detection. Image segmentation, which is an important aspect for disease detection in plant leaf disease, is done by using genetic algorithm." ] }
1812.01043
2902625477
An accurate and timely detection of diseases and pests in rice plants can help farmers apply timely treatment and thereby substantially reduce economic losses. Recent developments in deep learning based convolutional neural networks (CNN) have greatly improved image classification accuracy. In this paper, we present deep learning based approaches to detect diseases and pests in rice plants using images captured in real-life scenarios. We have experimented with various state-of-the-art CNN architectures on our large dataset of rice diseases and pests collected manually from the field, which contains both inter-class and intra-class variations and has nine classes in total. The results show that we can effectively detect and recognize rice diseases and pests using CNN, with a best accuracy of 99.53% on the test set using the VGG16 architecture. Though the accuracy of CNN models built on VGG16 or similar architectures is impressive, these models are not suitable for mobile devices because of their large size and huge number of parameters. To solve this problem, we propose a new CNN architecture, namely stacked CNN, that exploits two-stage training to reduce the size of the model significantly while maintaining high classification accuracy. Our experimental results show that we achieve 95% test accuracy with stacked CNN, while reducing the model size by 98% compared to VGG16. Such memory-efficient CNN architectures can contribute to mobile application development for rice disease detection and identification.
A real-time tomato plant disease detector was built using deep learning in @cite_17 . The authors considered the simultaneous occurrence of multiple diseases and pests, infections on different parts of the plant (stem, leaves, fruits, etc.), and images of different stages of the same disease. The dataset consisted of about 5000 images collected from different farms in Korea, most with heterogeneous backgrounds. Several geometric and intensity transformations were used to increase the number of images. The authors used three main families of detectors: Faster Region-based Convolutional Neural Network (Faster R-CNN), Region-based Fully Convolutional Network (R-FCN), and Single Shot Multibox Detector (SSD), which they considered as "deep learning meta-architectures". Each of these meta-architectures was combined with "deep feature extractors" such as VGG16 and Residual Network (ResNet). Their models both recognized and localized nine different diseases and pests with a best accuracy of 85.98%.
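This pairing of a detection "meta-architecture" with an interchangeable "deep feature extractor" is directly expressible in torchvision (version >= 0.13 assumed for the weights API); the following is a sketch, not the authors' code, of Faster R-CNN over VGG16 features for nine classes plus background:

```python
import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator
from torchvision.ops import MultiScaleRoIAlign

# "Deep feature extractor": VGG16 convolutional features (ImageNet weights)
backbone = torchvision.models.vgg16(weights="DEFAULT").features
backbone.out_channels = 512  # FasterRCNN must know the feature depth

# "Meta-architecture": region proposal network + detection head
anchors = AnchorGenerator(sizes=((32, 64, 128, 256, 512),),
                          aspect_ratios=((0.5, 1.0, 2.0),))
roi_pool = MultiScaleRoIAlign(featmap_names=["0"], output_size=7,
                              sampling_ratio=2)
model = FasterRCNN(backbone,
                   num_classes=10,  # 9 diseases/pests + background
                   rpn_anchor_generator=anchors,
                   box_roi_pool=roi_pool)
model.eval()
```

Swapping the backbone for a ResNet, or the FasterRCNN wrapper for an SSD, reproduces the grid of combinations the authors evaluated.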
{ "cite_N": [ "@cite_17" ], "mid": [ "2753403518" ], "abstract": [ "Plant Diseases and Pests are a major challenge in the agriculture sector. An accurate and a faster detection of diseases and pests in plants could help to develop an early treatment technique while substantially reducing economic losses. Recent developments in Deep Neural Networks have allowed researchers to drastically improve the accuracy of object detection and recognition systems. In this paper, we present a deep-learning-based approach to detect diseases and pests in tomato plants using images captured in-place by camera devices with various resolutions. Our goal is to find the more suitable deep-learning architecture for our task. Therefore, we consider three main families of detectors: Faster Region-based Convolutional Neural Network (Faster R-CNN), Region-based Fully Convolutional Network (R-FCN), and Single Shot Multibox Detector (SSD), which for the purpose of this work are called “deep learning meta-architectures”. We combine each of these meta-architectures with “deep feature extractors” such as VGG net and Residual Network (ResNet). We demonstrate the performance of deep meta-architectures and feature extractors, and additionally propose a method for local and global class annotation and data augmentation to increase the accuracy and reduce the number of false positives during training. We train and test our systems end-to-end on our large Tomato Diseases and Pests Dataset, which contains challenging images with diseases and pests, including several inter- and extra-class variations, such as infection status and location in the plant. Experimental results show that our proposed system can effectively recognize nine different types of diseases and pests, with the ability to deal with complex scenarios from a plant’s surrounding area." ] }
1812.01254
2903260265
Autonomous vehicles have to navigate the surrounding environment with partial observability of other objects sharing the road. Sources of uncertainty in autonomous vehicle measurements include sensor fusion errors, limited sensor range due to weather or object detection latency, occlusion, and hidden parameters such as other human drivers' intentions. Behavior planning must consider all sources of uncertainty in deciding future vehicle maneuvers. This paper presents a scalable framework for risk-averse behavior planning under uncertainty by incorporating QMDP, the unscented transform, and Monte Carlo tree search (MCTS). It is shown that using the upper confidence bound (UCB) for expanding the tree results in noisy Q-value estimates from the MCTS and degraded performance of QMDP. A modification to the action selection procedure in MCTS is proposed to achieve robust performance.
A tree search based approach to behavior planning was proposed in @cite_11 , where a deep reinforcement learning (DRL) agent is trained to optimize the tree search efficiency. The paper, however, does not address uncertainty and assumes a fully observed environment (MDP). @cite_0 proposes to use QMDP for single-lane autonomous driving under uncertainty. The paper assumes a Gaussian model for sensor noise and uses normalized probability density values as weights. Since the actual noise is not Gaussian and the transformation from observation to decision is non-linear, this model assumption may be prone to mismatch errors. @cite_10 frames highway planning as a cooperative perfect-information game and proposes to use MCTS to minimize a global, shared cost function. @cite_1 addresses the ramp-merging scenario using a probabilistic graphical model.
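For reference, the QMDP idea used in @cite_0 reduces to a one-line action rule once the Q-values of the underlying fully observed MDP are available; a minimal tabular sketch with hypothetical numbers:

```python
import numpy as np

def qmdp_action(belief, Q):
    """QMDP action selection: argmax_a sum_s b(s) * Q(s, a).

    belief: shape (S,), probability of each hidden state
    Q:      shape (S, A), optimal Q-values of the fully observed MDP
    """
    return int(np.argmax(belief @ Q))

# Hypothetical example: two hidden driver intentions, three ego actions
Q = np.array([[ 1.0, 0.5, -1.0],    # other driver yields
              [-5.0, 0.2,  0.8]])   # other driver does not yield
belief = np.array([0.7, 0.3])       # current belief over intentions
print(qmdp_action(belief, Q))       # -> 1, the belief-weighted best action
```

The known weakness is visible in the formula: QMDP assumes all uncertainty vanishes after the next step, so it never selects purely information-gathering actions.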
{ "cite_N": [ "@cite_0", "@cite_1", "@cite_10", "@cite_11" ], "mid": [ "2149822156", "2793154907", "2514762535", "2599174273" ], "abstract": [ "In this paper, a point-based Markov Decision Process (QMDP) algorithm is used for robust single-lane autonomous driving behavior control under uncertainties. Autonomous vehicle decision making is modeled as a Markov Decision Process (MDP), then extended to a QMDP framework. Based on MDP QMDP, three kinds of uncertainties are taken into account: sensor noise, perception constraints and surrounding vehicles' behavior. In simulation, the QMDP-based reasoning framework makes the autonomous vehicle perform with differing levels of conservativeness corresponding to different perception confidence levels. Road tests also indicate that the proposed algorithm helps the vehicle in avoiding potentially unsafe situations under these uncertainties. In general, the results indicate that the proposed QMDP-based algorithm makes autonomous driving more robust to limited sensing ability and occasional sensor failures.", "Cooperative driving behavior is essential for driving in traffic, especially for ramp merging, lane changing or navigating intersections. Autonomous vehicles should also manage these situations by behaving cooperatively and naturally. The challenge of cooperative driving is estimating other vehicles' intentions. In this paper, we present a novel method to estimate other human-driven vehicles' intentions with the aim of achieving a natural and amenable cooperative driving behavior, without using wireless communication. The new approach allows the autonomous vehicle to cooperate with multiple observable merging vehicles on the ramp with a leading vehicle ahead of the autonomous vehicle in the same lane. To avoid calculating trajectories, simplify computation, and take advantage of mature Level-3 components, the new method reacts to merging cars by determining a following target for an off-the-shelf distance keeping module (ACC) which governs speed control of the autonomous vehicle. We train and evaluate the proposed model using real traffic data. Results show that the new approach has a lower collision rate than previous methods and generates more human driver-like behaviors in terms of trajectory similarity and time-to-collision to leading vehicles.", "Human drivers use nonverbal communication and anticipation of other drivers' actions to master conflicts occurring in everyday driving situations. Without a high penetration of vehicle-to-vehicle communication an autonomous vehicle has to have the possibility to understand intentions of others and share own intentions with the surrounding traffic participants. This paper proposes a cooperative combinatorial motion planning algorithm without the need for inter vehicle communication based on Monte Carlo Tree Search (MCTS). We motivate why MCTS is particularly suited for the autonomous driving domain. Furthermore, adoptions to the MCTS algorithm are presented as for example simultaneous decisions, the usage of the Intelligent Driver Model as microscopic traffic simulation, and a cooperative cost function. We further show simulation results of merging scenarios in highway-like situations to underline the cooperative nature of the approach.", "Task and motion planning subject to Linear Temporal Logic (LTL) specifications in complex, dynamic environments requires efficient exploration of many possible future worlds. 
Model-free reinforcement learning has proven successful in a number of challenging tasks, but shows poor performance on tasks that require long-term planning. In this work, we integrate Monte Carlo Tree Search with hierarchical neural net policies trained on expressive LTL specifications. We use reinforcement learning to find deep neural networks representing both low-level control policies and task-level “option policies” that achieve high-level goals. Our combined architecture generates safe and responsive motion plans that respect the LTL constraints. We demonstrate our approach in a simulated autonomous driving setting, where a vehicle must drive down a road in traffic, avoid collisions, and navigate an intersection, all while obeying rules of the road." ] }
1812.01254
2903260265
Autonomous vehicles have to navigate the surrounding environment with partial observability of other objects sharing the road. Sources of uncertainty in autonomous vehicle measurements include sensor fusion errors, limited sensor range due to weather or object detection latency, occlusion, and hidden parameters such as other human drivers' intentions. Behavior planning must consider all sources of uncertainty in deciding future vehicle maneuvers. This paper presents a scalable framework for risk-averse behavior planning under uncertainty by incorporating QMDP, the unscented transform, and Monte Carlo tree search (MCTS). It is shown that using the upper confidence bound (UCB) for expanding the tree results in noisy Q-value estimates from the MCTS and degraded performance of QMDP. A modification to the action selection procedure in MCTS is proposed to achieve robust performance.
@cite_14 proposes POMCPOW and PFT-DPW as extensions of MCTS to POMDP settings with continuous observation and action spaces. In particular, the paper applies this approach to lane changing and uses a particle filter to track predictions for the parameters that model driver intentions (assuming the remaining observations are ideal). @cite_17 considers importance sampling within a tree search algorithm, where samples are drawn according to an importance sampling distribution. @cite_4 considers the joint estimation and control problem for a robot, using an unscented Kalman filter @cite_9 to estimate unknown parameter values and a variation of QMDP tree search that assumes full observability of the environment after the first step for computational efficiency. @cite_16 proposes a POMDP planning framework for automated driving at an unsignalized intersection, where a particle filter represents the belief about the routes other vehicles may take. They sample from the particles, evaluate various longitudinal accelerations along the ego vehicle's (EV's) path, and select the action achieving the highest expected reward.
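Several of these methods share one component: a particle filter over hidden driver intentions. A generic bootstrap-filter update sketch follows; the route hypotheses, lateral offsets, and Gaussian likelihood are hypothetical, chosen only to make the example runnable:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, observation, likelihood):
    """One bootstrap-filter update over discrete route hypotheses.

    particles:  shape (N,), sampled route indices
    weights:    shape (N,), importance weights (sum to 1)
    likelihood: function (observation, particle) -> p(obs | route)
    """
    weights = weights * np.array([likelihood(observation, p) for p in particles])
    weights /= weights.sum()
    # Resample when the effective sample size collapses
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

# Hypothetical: 3 candidate routes, observation = lateral offset in metres
route_offsets = np.array([0.0, 1.5, 3.0])
lik = lambda obs, r: np.exp(-0.5 * (obs - route_offsets[r]) ** 2)
particles = rng.integers(0, 3, size=1000)
weights = np.full(1000, 1e-3)
particles, weights = particle_filter_step(particles, weights, 1.4, lik)
```

The planner then scores candidate actions (e.g., longitudinal accelerations) against the particles, which is exactly the sampling step described for @cite_16.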
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_9", "@cite_16", "@cite_17" ], "mid": [ "", "2886887402", "1749494163", "2782474430", "2808915486" ], "abstract": [ "", "Real-world autonomous systems operate under uncertainty about both their pose and dynamics. Autonomous control systems must simultaneously perform estimation and control tasks to maintain robustness to changing dynamics or modeling errors. However, information gathering actions often conflict with optimal actions for reaching control objectives, requiring a trade-off between exploration and exploitation. The specific problem setting considered here is for discrete-time nonlinear systems, with process noise, input-constraints, and parameter uncertainty. This article frames this problem as a Bayes-adaptive Markov decision process and solves it online using Monte Carlo tree search with an unscented Kalman filter to account for process noise and parameter uncertainty. This method is compared with certainty equivalent model predictive control and a tree search method that approximates the QMDP solution, providing insight into when information gathering is useful. Discrete time simulations characterize performance over a range of process noise and bounds on unknown parameters. An offline optimization method is used to select the Monte Carlo tree search parameters without hand-tuning. In lieu of recursive feasibility guarantees, a probabilistic bounding heuristic is offered that increases the probability of keeping the state within a desired region.", "This paper points out the flaws in using the extended Kalman filter (EKE) and introduces an improvement, the unscented Kalman filter (UKF), proposed by Julier and Uhlman (1997). A central and vital operation performed in the Kalman filter is the propagation of a Gaussian random variable (GRV) through the system dynamics. In the EKF the state distribution is approximated by a GRV, which is then propagated analytically through the first-order linearization of the nonlinear system. This can introduce large errors in the true posterior mean and covariance of the transformed GRV, which may lead to sub-optimal performance and sometimes divergence of the filter. The UKF addresses this problem by using a deterministic sampling approach. The state distribution is again approximated by a GRV, but is now represented using a minimal set of carefully chosen sample points. These sample points completely capture the true mean and covariance of the GRV, and when propagated through the true nonlinear system, captures the posterior mean and covariance accurately to the 3rd order (Taylor series expansion) for any nonlinearity. The EKF in contrast, only achieves first-order accuracy. Remarkably, the computational complexity of the UKF is the same order as that of the EKF. Julier and Uhlman demonstrated the substantial performance gains of the UKF in the context of state-estimation for nonlinear control. Machine learning problems were not considered. We extend the use of the UKF to a broader class of nonlinear estimation problems, including nonlinear system identification, training of neural networks, and dual estimation problems. In this paper, the algorithms are further developed and illustrated with a number of additional examples.", "Automated driving requires decision making in dynamic and uncertain environments. The uncertainty from the prediction originates from the noisy sensor data and from the fact that the intention of human drivers cannot be directly measured. 
This problem is formulated as a partially observable Markov decision process (POMDP) with the intended route of the other vehicles as hidden variables. The solution of the POMDP is a policy determining the optimal acceleration of the ego vehicle along a preplanned path. Therefore, the policy is optimized for the most likely future scenarios resulting from an interactive, probabilistic motion model for the other vehicles. Considering possible future measurements of the surrounding cars allows the autonomous car to incorporate the estimated change in future prediction accuracy in the optimal policy. A compact representation results in a low-dimensional state-space. Thus, the problem can be solved online for varying road layouts and number of vehicles. This is done with a point-based solver in an anytime fashion on a continuous state-space. Our evaluation is threefold: At first, the convergence of the algorithm is evaluated and it is shown how the convergence can be improved with an additional search heuristic. Second, we show various planning scenarios to demonstrate how the introduction of different considered uncertainties results in more conservative planning. At the end, we show online simulations for the crossing of complex (unsignalized) intersections. We can demonstrate that our approach performs nearly as good as with full prior information about the intentions of the other vehicles and clearly outperforms reactive approaches.", "The partially observable Markov decision process (POMDP) provides a principled general framework for robot planning under uncertainty. Leveraging the idea of Monte Carlo sampling, recent POMDP planning algorithms have scaled up to various challenging robotic tasks, including, real-time online planning for autonomous vehicles. To further improve online planning performance, this paper presents IS-DESPOT, which introduces importance sampling to DESPOT, a state-of-the-art sampling-based POMDP algorithm for planning under uncertainty. Importance sampling improves DESPOT’s performance when there are critical, but rare events, which are difficult to sample. We prove that IS-DESPOT retains the theoretical guarantee of DESPOT. We demonstrate empirically that importance sampling significantly improves the performance of online POMDP planning for suitable tasks. We also present a general method for learning the importance sampling distribution." ] }
1812.01071
2902677310
Image inpainting is the task of filling in missing regions of a damaged or incomplete image. In this work we tackle this problem not only by using the available visual data but also by incorporating image semantics through the use of generative models. Our contribution is twofold: First, we learn a data latent space by training an improved version of the Wasserstein generative adversarial network, for which we incorporate a new generator and discriminator architecture. Second, the learned semantic information is combined with a new optimization loss for inpainting whose minimization infers the missing content conditioned on the available data. It takes into account powerful contextual and perceptual content inherent in the image itself. The benefits include the ability to recover large regions by accumulating semantic information even when it is not fully present in the damaged image. Experiments show that the presented method obtains qualitative and quantitative top-tier results in different experimental situations and also achieves accurate photo-realism comparable to state-of-the-art works.
Most inpainting methods found in the literature can be classified into two groups: model-based approaches and deep learning approaches. Among the former, two main families can be distinguished: local and non-local methods. Local methods, also denoted geometry-oriented methods, model images as functions with some degree of smoothness @cite_1 @cite_16 @cite_6 @cite_43 @cite_18 . These methods perform well in propagating smooth level lines or gradients, but fail in the presence of texture or for large missing regions. Non-local methods (also called exemplar- or patch-based) exploit a self-similarity prior by directly sampling the desired texture to perform the synthesis @cite_25 @cite_7 @cite_35 @cite_23 @cite_0 @cite_8 @cite_20 @cite_26 @cite_9 . They provide impressive results when inpainting textures and repetitive structures, even in the case of large holes. However, both types of methods exploit only the redundancy of the incomplete input image: smoothness priors in the geometry-based case and self-similarity principles in the non-local or patch-based one. Figures (b) and (c) illustrate the inpainting results (the inpainting hole is shown in (a)) of a local method (in particular @cite_43 ) and of the non-local method @cite_44 , respectively. As expected, the use of image semantics improves the results, as shown in (d).
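The smoothness prior of the local methods is easy to make concrete: harmonic inpainting fills the hole with the solution of Laplace's equation given the surrounding pixels. A minimal numpy sketch via Jacobi relaxation (illustrative of the class, not a reimplementation of any cited method):

```python
import numpy as np

def harmonic_inpaint(image, mask, iters=2000):
    """Fill masked pixels by iterating u <- mean of the 4 neighbours,
    i.e. Jacobi relaxation of Laplace's equation inside the hole.

    image: 2D float array; mask: boolean array, True = missing.
    """
    u = image.copy()
    u[mask] = image[~mask].mean()          # neutral initialisation
    for _ in range(iters):
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                      np.roll(u, 1, 1) + np.roll(u, -1, 1))
        # np.roll wraps at the borders; fine here, the hole is interior
        u[mask] = avg[mask]                # update only inside the hole
    return u

img = np.linspace(0, 1, 64)[None, :] * np.ones((64, 1))  # smooth ramp
mask = np.zeros_like(img, dtype=bool)
mask[20:40, 20:40] = True                                # square hole
restored = harmonic_inpaint(img, mask)
```

Exactly as noted above, such a scheme propagates smooth gradients into the hole but cannot invent texture, which motivates the patch-based methods and, ultimately, the semantic approach of this work.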
{ "cite_N": [ "@cite_35", "@cite_18", "@cite_26", "@cite_7", "@cite_8", "@cite_9", "@cite_1", "@cite_6", "@cite_0", "@cite_43", "@cite_44", "@cite_23", "@cite_16", "@cite_25", "@cite_20" ], "mid": [ "2105038642", "", "2057016804", "1237115532", "2072259297", "2345034922", "2140865211", "", "1544005643", "2132250001", "", "", "2031832209", "2116013899", "1979331265" ], "abstract": [ "A new algorithm is proposed for removing large objects from digital images. The challenge is to fill in the hole that is left behind in a visually plausible way. In the past, this problem has been addressed by two classes of algorithms: 1) \"texture synthesis\" algorithms for generating large image regions from sample textures and 2) \"inpainting\" techniques for filling in small image gaps. The former has been demonstrated for \"textures\"-repeating two-dimensional patterns with some stochasticity; the latter focus on linear \"structures\" which can be thought of as one-dimensional patterns, such as lines and object contours. This paper presents a novel and efficient algorithm that combines the advantages of these two approaches. We first note that exemplar-based texture synthesis contains the essential process required to replicate both texture and structure; the success of structure propagation, however, is highly dependent on the order in which the filling proceeds. We propose a best-first algorithm in which the confidence in the synthesized pixel values is propagated in a manner similar to the propagation of information in inpainting. The actual color values are computed using exemplar-based synthesis. In this paper, the simultaneous propagation of texture and structure information is achieved by a single , efficient algorithm. Computational efficiency is achieved by a block-based sampling process. A number of examples on real and synthetic images demonstrate the effectiveness of our algorithm in removing large occluding objects, as well as thin scratches. Robustness with respect to the shape of the manually selected target region is also demonstrated. Our results compare favorably to those obtained by existing techniques.", "", "We propose a method for automatically guiding patch-based image completion using mid-level structural cues. Our method first estimates planar projection parameters, softly segments the known region into planes, and discovers translational regularity within these planes. This information is then converted into soft constraints for the low-level completion algorithm by defining prior probabilities for patch offsets and transformations. Our method handles multiple planes, and in the absence of any detected planes falls back to a baseline fronto-parallel image completion algorithm. We validate our technique through extensive comparisons with state-of-the-art algorithms on a variety of scenes.", "The success of some recent texture synthesis methods, see [8, 17], suggests that there exists an underlying formulation explaining their performance and paving the way to more involved modeling. Based on their ideas, we formalize a low-level global deterministic solution for image inpainting. A correspondence map is defined as linking each blank or missing pixel to the pixel where its value is taken from, in the seed image. The above-mentioned algorithms are seen as descent procedures to minimize a functional of this correspondence map, the inpainting energy. We discuss why they should not be seen as procedures to sample a probability distribution on the correspondence maps. 
We therefore question the claims that probability is anywhere involved at this explanatory level. The algorithm we use is mostly taken from [17]. The latter however suffers from a strong directional bias, the direction in which texture is grown. We restore rotationinvariance at the level of both the target function and the algorithm. Our encouraging numerical results could not have been obtained by a directional texture-growing algorithm.", "Among all methods for reconstructing missing regions in a digital image, the so-called exemplar-based algorithms are very efficient and often produce striking results. They are based on the simple idea—initially used for texture synthesis—that the unknown part of an image can be reconstructed by simply pasting samples extracted from the known part. Beyond heuristic considerations, there have been very few contributions in the literature to explain from a mathematical point of view the performances of these purely algorithmic and discrete methods. More precisely, a recent paper by Levina and Bickel [Ann. Statist., 34 (2006), pp. 1751–1773] provides a theoretical explanation of their ability to recover very well the texture, but nothing equivalent has been done so far for the recovery of geometry. Our purpose in this paper is twofold: (1) to propose well-posed variational models in the continuous domain that can be naturally associated to exemplar-based algorithms; (2) to investigate their ability to recons...", "This paper presents a new method for exemplar-based image inpainting using transformed patches. We build upon a recent affine invariant self-similarity measure which automatically transforms patches to compare them in an appropriate manner. As a consequence, it intrinsically extends the set of available source patches to copy information from. When comparing two patches, instead of searching for the appropriate patch transformation in a highly dimensional parameter space, our approach allows us to determine a single transformation from the texture content in both patches. We incorporate the affine invariant similarity measure in a variational formulation for inpainting and present an algorithm together with experimental results illustrating this approach.", "Object recognition, robotic vision, occluding noise removal or photograph design require the ability to perform disocclusion. We call disocclusion the recovery of hidden parts of objects in a digital image by interpolation from the vicinity of the occluded area. It is shown in this paper how disocclusion can be performed by means of a level lines structure, which offers a reliable, complete and contrast-invariant representation of an image, in contrast to edges. Level lines based disocclusion yields a solution that may have strong discontinuities, which is not possible with PDE-based interpolation. Moreover, the proposed method is fully compatible with Kanizsa's (1996) theory of \"amodal completion\".", "", "Image inpainting techniques have been widely investigated to remove undesired objects in an image. Conventionally, missing parts in an image are completed by optimizing the objective function using pattern similarity. However, unnatural textures are easily generated due to two factors: (1) available samples in the image are quite limited, and (2) pattern similarity is one of the required conditions but is not sufficient for reproducing natural textures. 
In this paper, we propose a new energy function based on the pattern similarity considering brightness changes of sample textures (for (1)) and introducing spatial locality as an additional constraint (for (2)). The effectiveness of the proposed method is successfully demonstrated by qualitative and quantitative evaluation. Furthermore, the evaluation methods used in much inpainting research are discussed.", "Given an image where a specified region is unknown, image inpainting or image completion is the problem of inferring the image content in this region. Traditional retouching or inpainting is the practice of restoring aged artwork, where damaged or missing portions are repainted based on the surrounding content to approximate the original appearance. In the context of digital images, inpainting is used to restore regions of an image that are corrupted by noise or where the data is missing. Inpainting is also used to solve disocclusion, to estimate the scene behind an obscuring foreground object. A popular use of digital inpainting is object removal, for example, to remove a trashcan that disrupts a scene of otherwise natural beauty. Inpainting is an interpolation problem, filling the unknown region with a condition to agree with the known image on the boundary. A classical solution for such an interpolation is to solve Laplace’s equation. However, Laplace’s equation is usually unsatisfactory for images since it is overly smooth. It cannot recover a step edge passing through the region. Total variation (TV) regularization is an effective inpainting technique which is capable of recovering sharp edges under some conditions (these conditions will be explained). The use of TV regularization was originally developed for image denoising by Rudin, Osher, and Fatemi [3] and then applied to inpainting by Chan and Shen [13]. TV-regularized inpainting does not create texture, the method is limited to inpainting the geometric structure.", "", "", "Dedicated to Stanley Osher on the occasion of his 60th birthday. Abstract. Inspired by the recent work of on digital inpaintings (SIGGRAPH 2000), we develop general mathematical models for local inpaintings of nontexture images. On smooth regions, inpaintings are connected to the harmonic and biharmonic extensions, and inpainting orders are analyzed. For inpaintings involving the recovery of edges, we study a variational model that is closely connected to the classical total variation (TV) denoising model of Rudin, Osher, and Fatemi (Phys. D, 60 (1992), pp. 259-268). Other models are also discussed based on the Mumford-Shah regularity (Comm. Pure Appl. Math., XLII (1989), pp. 577-685) and curvature driven diffusions (CDD) of Chan and Shen (J. Visual Comm. Image Rep., 12 (2001)). The broad applications of the inpainting models are demonstrated through restoring scratched old photos, disocclusion in vision analysis, text removal, digital zooming, and edge-based image coding.", "A non-parametric method for texture synthesis is proposed. The texture synthesis process grows a new image outward from an initial seed, one pixel at a time. A Markov random field model is assumed, and the conditional distribution of a pixel given all its neighbors synthesized so far is estimated by querying the sample image and finding all similar neighborhoods. The degree of randomness is controlled by a single perceptually intuitive parameter. 
The method aims at preserving as much local structure as possible and produces good results for a wide variety of synthetic and real-world textures.", "Non-local methods for image denoising and inpainting have gained considerable attention in recent years. This is in part due to their superior performance in textured images, a known weakness of purely local methods. Local methods on the other hand have demonstrated to be very appropriate for the recovering of geometric structures such as image edges. The synthesis of both types of methods is a trend in current research. Variational analysis in particular is an appropriate tool for a unified treatment of local and non-local methods. In this work we propose a general variational framework for non-local image inpainting, from which important and representative previous inpainting schemes can be derived, in addition to leading to novel ones. We explicitly study some of these, relating them to previous work and showing results on synthetic and real images." ] }
1812.01214
2902596607
Neural networks currently dominate the machine learning community, and they do so for good reasons. Their accuracy on complex tasks such as image classification is unrivaled at the moment, and with recent improvements they are reasonably easy to train. Nevertheless, neural networks lack robustness and interpretability. Prototype-based vector quantization methods, on the other hand, are known for being robust and interpretable. For this reason, we propose techniques and strategies to merge both approaches. This contribution will particularly highlight the similarities between them and outline how to construct a prototype-based classification layer for multilayer networks. Additionally, we provide an alternative, prototype-based approach to the classical convolution operation. Numerical results are not part of this report; instead, the focus lies on establishing a strong theoretical framework. By publishing our framework and the respective theoretical considerations and justifications before finalizing our numerical experiments, we hope to jump-start the incorporation of prototype-based learning in neural networks and vice versa.
One of the first contributions reporting an approach for fusing LVQ with NNs is @cite_43 . Later, the idea was formulated more precisely in @cite_80 . In @cite_64 , a fused network was trained on MNIST and Cifar10. The authors showed how a regularization term can be used to obtain a generative and a discriminative model at the same time. Moreover, they applied a reject strategy as an example and showed that the concept of incremental learning is also applicable.
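The regularization idea of @cite_64 (convolutional prototype learning) can be sketched as a distance-based classifier plus a pull-toward-prototype term. A minimal PyTorch version follows; the weighting lam and the feature extractor are left abstract, and the details are illustrative rather than the cited implementation:

```python
import torch
import torch.nn.functional as F

def prototype_loss(features, labels, prototypes, lam=0.01):
    """Discriminative + generative objective in the spirit of CPL.

    features:   (B, D) embeddings from any backbone
    prototypes: (C, D) one learnable prototype per class
    Classification uses negative squared distances as logits; the
    regularizer pulls each feature toward its own class prototype.
    """
    d2 = torch.cdist(features, prototypes) ** 2        # (B, C)
    ce = F.cross_entropy(-d2, labels)                  # discriminative part
    pl = d2[torch.arange(len(labels)), labels].mean()  # generative pull
    return ce + lam * pl

features = torch.randn(8, 16)
prototypes = torch.randn(4, 16, requires_grad=True)
labels = torch.randint(0, 4, (8,))
loss = prototype_loss(features, labels, prototypes)
loss.backward()
```

Because class scores are distances to prototypes, far-away inputs can be rejected by thresholding the smallest distance, which is what enables the reject strategy mentioned above.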
{ "cite_N": [ "@cite_43", "@cite_64", "@cite_80" ], "mid": [ "2903485739", "2963314614", "2751446162" ], "abstract": [ "", "Convolutional neural networks (CNNs) have been widely used for image classification. Despite its high accuracies, CNN has been shown to be easily fooled by some adversarial examples, indicating that CNN is not robust enough for pattern classification. In this paper, we argue that the lack of robustness for CNN is caused by the softmax layer, which is a totally discriminative model and based on the assumption of closed world (i.e., with a fixed number of categories). To improve the robustness, we propose a novel learning framework called convolutional prototype learning (CPL). The advantage of using prototypes is that it can well handle the open world recognition problem and therefore improve the robustness. Under the framework of CPL, we design multiple classification criteria to train the network. Moreover, a prototype loss (PL) is proposed as a regularization to improve the intra-class compactness of the feature representation, which can be viewed as a generative model based on the Gaussian assumption of different classes. Experiments on several datasets demonstrate that CPL can achieve comparable or even better results than traditional CNN, and from the robustness perspective, CPL shows great advantages for both the rejection and incremental category learning tasks.", "The advantage of prototype based learning vector quantizers are the intuitive and simple model adaptation as well as the easy interpretability of the prototypes as class representatives for the class distribution to be learned. Although they frequently yield competitive performance and show robust behavior nowadays powerful alternatives have increasing attraction. Particularly, deep architectures of multilayer networks achieve frequently very high accuracies and are, thanks to modern graphic processor units use for calculation, trainable in acceptable time. In this conceptual paper we show, how we can combine both network architectures to benefit from their advantages. For this purpose, we consider learning vector quantizers in terms of feedforward network architectures and explain how it can be combined effectively with multilayer or single-layer feedforward network architectures. This approach includes deep and flat architectures as well as the popular extreme learning machines. For the resulting networks, the multi- single-layer networks act as adaptive filters like in signal processing while the interpretability of the prototype-based learning vector quantizers is kept for the resulting filtered feature space. In this way a powerful combination of two successful architectures is obtained." ] }
1812.01214
2902596607
Neural networks currently dominate the machine learning community, and they do so for good reasons. Their accuracy on complex tasks such as image classification is unrivaled at the moment, and with recent improvements they are reasonably easy to train. Nevertheless, neural networks lack robustness and interpretability. Prototype-based vector quantization methods, on the other hand, are known for being robust and interpretable. For this reason, we propose techniques and strategies to merge both approaches. This contribution will particularly highlight the similarities between them and outline how to construct a prototype-based classification layer for multilayer networks. Additionally, we provide an alternative, prototype-based approach to the classical convolution operation. Numerical results are not part of this report; instead, the focus lies on establishing a strong theoretical framework. By publishing our framework and the respective theoretical considerations and justifications before finalizing our numerical experiments, we hope to jump-start the incorporation of prototype-based learning in neural networks and vice versa.
The definition of a network directly on the discrete cluster representation of a VQ layer was first mentioned for the VQ-Variational-Autoencoder (VQ-VAE) @cite_10 . There, the output of the VQ is a map in which each pixel holds the integer index @math of the closest prototype, and this output is defined as the latent space of the VQ-VAE. The method was trained via the gradient-straight-through method, and the obtained results are promising. Nevertheless, the authors were unable to train the model from scratch, even with the soft-to-hard assignment proposed in @cite_75 . In our simulations performed so far, we made observations similar to those in @cite_10 : if we train using soft-to-hard assignments, the model is able to invert the continuous relaxations, and training by a simple gradient-straight-through approximation is not stable when the model is trained from scratch. However, our proposed gradient approximation works well across many of the settings we evaluated, both when training from scratch and when starting from a pre-trained network.
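For clarity, here is a minimal PyTorch sketch of a VQ layer with the standard straight-through gradient; it illustrates the trick discussed above, not our proposed approximation:

```python
import torch

def vector_quantize(z, prototypes):
    """Map each input vector to its nearest prototype.

    z:          (B, D) encoder outputs
    prototypes: (K, D) codebook
    The forward pass returns hard assignments; the straight-through
    estimator copies the gradient of the quantized output onto z.
    """
    d2 = torch.cdist(z, prototypes) ** 2
    idx = d2.argmin(dim=1)            # integer index of the closest prototype
    zq = prototypes[idx]
    zq = z + (zq - z).detach()        # forward = zq, backward = identity in z
    return zq, idx

z = torch.randn(8, 16, requires_grad=True)
codebook = torch.randn(32, 16)
zq, idx = vector_quantize(z, codebook)
zq.sum().backward()                   # gradients reach z, not the codebook
```

Note that the codebook receives no gradient through this path; VQ-VAE trains it with separate codebook and commitment losses, which is one reason training from scratch can be unstable.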
{ "cite_N": [ "@cite_10", "@cite_75" ], "mid": [ "2963799213", "2964164354" ], "abstract": [ "Learning useful representations without supervision remains a key challenge in machine learning. In this paper, we propose a simple yet powerful generative model that learns such discrete representations. Our model, the Vector Quantised-Variational AutoEncoder (VQ-VAE), differs from VAEs in two key ways: the encoder network outputs discrete, rather than continuous, codes; and the prior is learnt rather than static. In order to learn a discrete latent representation, we incorporate ideas from vector quantisation (VQ). Using the VQ method allows the model to circumvent issues of posterior collapse'' -— where the latents are ignored when they are paired with a powerful autoregressive decoder -— typically observed in the VAE framework. Pairing these representations with an autoregressive prior, the model can generate high quality images, videos, and speech as well as doing high quality speaker conversion and unsupervised learning of phonemes, providing further evidence of the utility of the learnt representations.", "We present a new approach to learn compressible representations in deep architectures with an end-to-end training strategy. Our method is based on a soft (continuous) relaxation of quantization and entropy, which we anneal to their discrete counterparts throughout training. We showcase this method for two challenging applications: Image compression and neural network compression. While these tasks have typically been approached with different methods, our soft-to-hard quantization approach gives results competitive with the state-of-the-art for both." ] }
1907.00593
2954957482
With the development of deep neural networks, the size of network models becomes larger and larger. Model compression has become an urgent need for deploying these network models to mobile or embedded devices. Model quantization is a representative model compression technique. Although a lot of quantization methods have been proposed, many of them suffer from a high quantization error caused by a long-tail distribution of network weights. In this paper, we propose a novel quantization method, called weight normalization based quantization (WNQ), for model compression. WNQ adopts weight normalization to avoid the long-tail distribution of network weights and subsequently reduces the quantization error. Experiments on CIFAR-100 and ImageNet show that WNQ can outperform other baselines to achieve state-of-the-art performance.
Besides model quantization, pruning @cite_31 @cite_15 @cite_28 @cite_30 @cite_38 , tensor decomposition @cite_36 @cite_22 @cite_35 and knowledge distillation @cite_29 @cite_11 are also widely used model compression techniques. We do not discuss these techniques in detail, because the focus of this paper is on quantization.
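For orientation only, since this paper focuses on quantization: the simplest of the listed alternatives, unstructured magnitude pruning in the spirit of @cite_31 , fits in a few lines (a sketch, not any cited implementation):

```python
import torch

def magnitude_prune(weight, sparsity=0.9):
    """Zero out the smallest-magnitude entries (unstructured pruning).

    weight: any tensor; sparsity: fraction of entries to remove.
    Classic three-step pipelines prune like this, then fine-tune.
    """
    k = int(sparsity * weight.numel())
    threshold = weight.abs().flatten().kthvalue(k).values
    mask = weight.abs() > threshold
    return weight * mask, mask

w = torch.randn(256, 256)
w_pruned, mask = magnitude_prune(w, sparsity=0.9)
print(f"kept {mask.float().mean().item():.1%} of weights")
```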
{ "cite_N": [ "@cite_30", "@cite_38", "@cite_35", "@cite_22", "@cite_15", "@cite_28", "@cite_36", "@cite_29", "@cite_31", "@cite_11" ], "mid": [ "2964233199", "2964023041", "2883028206", "1798945469", "2963000224", "2962965870", "2177847924", "1821462560", "2963674932", "2620998106" ], "abstract": [ "We propose an efficient and unified framework, namely ThiNet, to simultaneously accelerate and compress CNN models in both training and inference stages. We focus on the filter level pruning, i.e., the whole filter would be discarded if it is less important. Our method does not change the original network structure, thus it can be perfectly supported by any off-the-shelf deep learning libraries. We formally establish filter pruning as an optimization problem, and reveal that we need to prune filters based on statistics information computed from its next layer, not the current layer, which differentiates ThiNet from existing methods. Experimental results demonstrate the effectiveness of this strategy, which has advanced the state-of-the-art. We also show the performance of ThiNet on ILSVRC-12 benchmark. ThiNet achieves 3.31 x FLOPs reduction and 16.63× compression on VGG-16, with only 0.52 top-5 accuracy drop. Similar experiments with ResNet-50 reveal that even for a compact network, ThiNet can also reduce more than half of the parameters and FLOPs, at the cost of roughly 1 top-5 accuracy drop. Moreover, the original VGG-16 model can be further pruned into a very small model with only 5.05MB model size, preserving AlexNet level accuracy but showing much stronger generalization ability.", "Convolutional Neural Networks(CNNs) are both computation and memory intensive which hindered their deployment in many resource efficient devices. Inspired by neural science research, we propose the synaptic pruning: a data-driven method to prune connections between convolution layers with a newly proposed class of parameters called Synaptic Strength. Synaptic Strength is designed to capture the importance of a synapse based on the amount of information it transports. Experimental results show the effectiveness of our approach empirically. On CIFAR-10, we can prune various CNN models with up to 96 connections removed, which results in significant size reduction and computation saving. Further evaluation on ImageNet demonstrates that synaptic pruning is able to discover efficient models which are competitive to state-of-the-art compact CNNs such as MobileNet-V2 and NasNet-Mobile. Our contribution is summarized as follows: (1) We introduce Synaptic Strength, a new class of parameters for convolution layer to indicate the importance of each connection. (2) Our approach can prune various CNN models with high compression without compromising accuracy. (3) Further investigation shows, the proposed Synaptic Strength is a better indicator for kernel pruning compare with the previous approach both in empirical results and theoretical analysis.", "In this paper we propose a novel decomposition method based on filter group approximation, which can significantly reduce the redundancy of deep convolutional neural networks (CNNs) while maintaining the majority of feature representation. Unlike other low-rank decomposition algorithms which operate on spatial or channel dimension of filters, our proposed method mainly focuses on exploiting the filter group structure for each layer. 
For several commonly used CNN models, including VGG and ResNet, our method can reduce over 80 floating-point operations (FLOPs) with less accuracy drop than state-of-the-art methods on various image classification datasets. Besides, experiments demonstrate that our method is conducive to alleviating degeneracy of the compressed network, which hurts the convergence and performance of the network.", "Deep neural networks currently demonstrate state-of-the-art performance in several domains. At the same time, models of this class are very demanding in terms of computational resources. In particular, a large amount of memory is required by commonly used fully-connected layers, making it hard to use the models on low-end devices and stopping the further increase of the model size. In this paper we convert the dense weight matrices of the fully-connected layers to the Tensor Train [17] format such that the number of parameters is reduced by a huge factor and at the same time the expressive power of the layer is preserved. In particular, for the Very Deep VGG networks [21] we report the compression factor of the dense weight matrix of a fully-connected layer up to 200000 times leading to the compression factor of the whole network up to 7 times.", "High demand for computation resources severely hinders deployment of large-scale Deep Neural Networks (DNN) in resource constrained devices. In this work, we propose a Structured Sparsity Learning (SSL) method to regularize the structures (i.e., filters, channels, filter shapes, and layer depth) of DNNs. SSL can: (1) learn a compact structure from a bigger DNN to reduce computation cost; (2) obtain a hardware-friendly structured sparsity of DNN to efficiently accelerate the DNN's evaluation. Experimental results show that SSL achieves on average 5.1 × and 3.1 × speedups of convolutional layer computation of AlexNet against CPU and GPU, respectively, with off-the-shelf libraries. These speedups are about twice speedups of non-structured sparsity; (3) regularize the DNN structure to improve classification accuracy. The results show that for CIFAR-10, regularization on layer depth reduces a 20-layer Deep Residual Network (ResNet) to 18 layers while improves the accuracy from 91.25 to 92.60 , which is still higher than that of original ResNet with 32 layers. For AlexNet, SSL reduces the error by 1 .", "The success of CNNs in various applications is accompanied by a significant increase in the computation and parameter storage costs. Recent efforts toward reducing these overheads involve pruning and compressing the weights of various layers without hurting original accuracy. However, magnitude-based pruning of weights reduces a significant number of parameters from the fully connected layers and may not adequately reduce the computation costs in the convolutional layers due to irregular sparsity in the pruned networks. We present an acceleration method for CNNs, where we prune filters from CNNs that are identified as having a small effect on the output accuracy. By removing whole filters in the network together with their connecting feature maps, the computation costs are reduced significantly. In contrast to pruning weights, this approach does not result in sparse connectivity patterns. Hence, it does not need the support of sparse convolution libraries and can work with existing efficient BLAS libraries for dense matrix multiplications. 
We show that even simple filter pruning techniques can reduce inference costs for VGG-16 by up to 34 and ResNet-110 by up to 38 on CIFAR10 while regaining close to the original accuracy by retraining the networks.", "Although the latest high-end smartphone has powerful CPU and GPU, running deeper convolutional neural networks (CNNs) for complex tasks such as ImageNet classification on mobile devices is challenging. To deploy deep CNNs on mobile devices, we present a simple and effective scheme to compress the entire CNN, which we call one-shot whole network compression. The proposed scheme consists of three steps: (1) rank selection with variational Bayesian matrix factorization, (2) Tucker decomposition on kernel tensor, and (3) fine-tuning to recover accumulated loss of accuracy, and each step can be easily implemented using publicly available tools. We demonstrate the effectiveness of the proposed scheme by testing the performance of various compressed CNNs (AlexNet, VGGS, GoogLeNet, and VGG-16) on the smartphone. Significant reductions in model size, runtime, and energy consumption are obtained, at the cost of small loss in accuracy. In addition, we address the important implementation level issue on 1?1 convolution, which is a key operation of inception module of GoogLeNet as well as CNNs compressed by our proposed scheme.", "A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.", "Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. 
Similar experiments with VGG-16 found that the total number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.", "Model distillation is an effective and widely used technique to transfer knowledge from a teacher to a student network. The typical application is to transfer from a powerful large network or ensemble to a small network, in order to meet the low-memory or fast execution requirements. In this paper, we present a deep mutual learning (DML) strategy. Different from the one-way transfer between a static pre-defined teacher and a student in model distillation, with DML, an ensemble of students learn collaboratively and teach each other throughout the training process. Our experiments show that a variety of network architectures benefit from mutual learning and achieve compelling results on both category and instance recognition tasks. Surprisingly, it is revealed that no prior powerful teacher network is necessary - mutual learning of a collection of simple student networks works, and moreover outperforms distillation from a more powerful yet static teacher." ] }
1907.00718
2954653812
Quantification of uncertainty in production/injection forecasting is an important aspect of reservoir simulation studies. Conventional approaches include intrusive Galerkin-based methods (e.g., generalized polynomial chaos (gPC) and stochastic collocation (SC) methods) and non-intrusive Monte Carlo (MC) based methods. Nevertheless, the quantification is conducted on reformulations of the underlying stochastic PDEs with fixed well controls. If one wants to take various well control plans into account, expensive computations need to be repeated for each well design independently. In this project, we take advantage of the equation-free spirit of convolutional neural networks (CNN) to overcome this challenge and thus achieve the flexibility of efficient uncertainty quantification with various well controls. We are interested in the development of surrogate models for uncertainty quantification and propagation in reservoir simulations using a deep convolutional encoder-decoder network, as an analogue to image-to-image regression tasks in computer science. First, a U-Net architecture is applied to replace the conventional, expensive deterministic PDE solver. Then we adopt the idea of shape-guided image generation using a variational U-Net and design a new variational U-Net architecture for "control-guided" reservoir simulation. Backward propagation is learned in the network to extract the hidden physical quantities, and the learned forward propagation then predicts future production from these hidden variables under various well controls. Comparisons in computational efficiency are made between our proposed CNN approach and the conventional MC approach. Significant improvements in computational speed with reasonable accuracy loss are observed in the numerical tests.
The subsurface is a complex system involving interacting physics at multiple scales, for which no accurate deterministic model exists. Probabilistic models have therefore been explored to account for the uncertainties arising from model errors, model parametrization, heterogeneity of the environment, and varying boundary geometries, giving rise to extensive research interest in uncertainty quantification in reservoir engineering. A dominant strategy for such problems is to solve the deterministic problem at a large but finite number of realizations of the random inputs using Monte Carlo sampling @cite_5 . Variations of MC methods, including quasi-MC @cite_23 , multi-level MC (MLMC) @cite_13 @cite_6 , stratified MC @cite_21 , etc., are designed to sample more efficiently. Intrusive methods (e.g., moment equations @cite_19 , polynomial chaos @cite_7 , stochastic collocation @cite_14 , the method of distributions @cite_4 ), as alternatives to MC simulations, have been widely studied in the past decades. Although efficient for some problems, intrusive methods in general require additional effort to reformulate the model and reconstruct the deterministic solvers. In the meantime, it is well known that all intrusive methods suffer from the curse of dimensionality (in random space). We refer to the reviews on these topics in @cite_22 .
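The baseline MC strategy is simple to state: draw realizations of the random input, run the deterministic solver on each, and average. A toy sketch follows, where solve_flow is a hypothetical stand-in for the reservoir simulator and the permeability field is an uncorrelated log-normal field for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

def solve_flow(perm_field):
    """Stand-in for a deterministic reservoir simulator: maps a
    permeability field to a toy scalar 'production' response."""
    return perm_field.mean() / (1.0 + perm_field.std())

def mc_uq(n_samples=1000, grid=(32, 32)):
    # Log-normal permeability: exp of a Gaussian field (iid here; real
    # studies use correlated fields, e.g. via a Karhunen-Loeve expansion)
    outputs = np.array([solve_flow(np.exp(rng.standard_normal(grid)))
                        for _ in range(n_samples)])
    return outputs.mean(), outputs.var(ddof=1)

mean, var = mc_uq()
print(f"E[Q] ~ {mean:.4f}, Var[Q] ~ {var:.2e}")
```

The statistical error decays as O(N^{-1/2}) independently of the stochastic dimension, which is what makes MC robust but expensive and what the variance-reduction variants above try to improve.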
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_22", "@cite_7", "@cite_21", "@cite_6", "@cite_19", "@cite_23", "@cite_5", "@cite_13" ], "mid": [ "", "2342288376", "1880751280", "2136602340", "", "2553111806", "1570886746", "2785458956", "1985093013", "2014945091" ], "abstract": [ "", "", "The first graduate-level textbook to focus on fundamental aspects of numerical methods for stochastic computations, this book describes the class of numerical methods based on generalized polynomial chaos (gPC). These fast, efficient, and accurate methods are an extension of the classical spectral methods of high-dimensional random spaces. Designed to simulate complex systems subject to random inputs, these methods are widely used in many areas of computer science and engineering. The book introduces polynomial approximation theory and probability theory; describes the basic theory of gPC methods through numerical examples and rigorous development; details the procedure for converting stochastic equations into deterministic ones; using both the Galerkin and collocation approaches; and discusses the distinct differences and challenges arising from high-dimensional problems. The last section is devoted to the application of gPC methods to critical areas such as inverse problems and data assimilation. Ideal for use by graduate students and researchers both in the classroom and for self-study, Numerical Methods for Stochastic Computations provides the required tools for in-depth research related to stochastic computations.The first graduate-level textbook to focus on the fundamentals of numerical methods for stochastic computations Ideal introduction for graduate courses or self-study Fast, efficient, and accurate numerical methods Polynomial approximation theory and probability theory included Basic gPC methods illustrated through examples", "We present a new algorithm to model the input uncertainty and its propagation in incompressible flow simulations. The stochastic input is represented spectrally by employing orthogonal polynomial functionals from the Askey scheme as trial basis to represent the random space. A standard Galerkin projection is applied in the random dimension to obtain the equations in the weak form. The resulting system of deterministic equations is then solved with standard methods to obtain the solution for each random mode. This approach can be considered as a generalization of the original polynomial chaos expansion, first introduced by Wiener [Am. J. Math. 60 (1938) 897]. The original method employs the Hermite polynomials (one of the 13 members of the Askey scheme) as the basis in random space. The algorithm is applied to micro-channel flows with random wall boundary conditions, and to external flows with random freestream. Efficiency and convergence are studied by comparing with exact solutions as well as numerical solutions obtained by Monte Carlo simulations. It is shown that the generalized polynomial chaos method promises a substantial speed-up compared with the Monte Carlo method. The utilization of different type orthogonal polynomials from the Askey scheme also provides a more efficient way to represent general non-Gaussian processes compared with the original Wiener-Hermite expansions.", "", "", "Quantitative descriptions of flow and transport in subsurface environmentsare often hampered by uncertainty in the input parameters. Treatingsuch parameters as random fields represents a useful tool for dealingwith uncertainty. 
We review the state of the art of stochasticdescription of hydrogeology with an emphasis on statisticallyinhomogeneous (nonstationary) models. Our focus is on composite mediamodels that allow one to estimate uncertainties both in geometricalstructure of geological media consisting of various materials and inphysical properties of these materials.", "", "We have sold 4300 copies worldwide of the first edition (1999). This new edition contains five completely new chapters covering new developments.", "The author’s presentation of multilevel Monte Carlo path simulation at the MCQMC 2006 conference stimulated a lot of research into multilevel Monte Carlo methods. This paper reviews the progress since then, emphasising the simplicity, flexibility and generality of the multilevel Monte Carlo approach. It also offers a few original ideas and suggests areas for future research." ] }
1907.00718
2954653812
Quantification of uncertainty in production and injection forecasting is an important aspect of reservoir simulation studies. Conventional approaches include intrusive Galerkin-based methods (e.g., generalized polynomial chaos (gPC) and stochastic collocation (SC) methods) and non-intrusive Monte Carlo (MC) based methods. In these approaches, however, the quantification is conducted on reformulations of the underlying stochastic PDEs with fixed well controls. If one wants to take various well control plans into account, expensive computations need to be repeated for each well design independently. In this project, we take advantage of the equation-free spirit of convolutional neural networks (CNNs) to overcome this challenge and thus achieve the flexibility of efficient uncertainty quantification under various well controls. We are interested in the development of surrogate models for uncertainty quantification and propagation in reservoir simulations, using a deep convolutional encoder-decoder network as an analogue of image-to-image regression tasks in computer vision. First, a U-Net architecture is applied to replace the conventional, expensive deterministic PDE solver. Then we adopt the idea of shape-guided image generation with a variational U-Net and design a new variational U-Net architecture for "control-guided" reservoir simulation. A backward mapping is learned in the network to extract the hidden physical quantities, and future production is then predicted by the learned forward mapping using these hidden variables under various well controls. Comparisons in computational efficiency are made between our proposed CNN approach and the conventional MC approach. Significant improvements in computational speed, with reasonable accuracy loss, are observed in the numerical tests.
Recently, deep learning has been explored as a competitive methodology across fields such as fluid mechanics @cite_25 , hydrology @cite_24 @cite_3 , bioinformatics @cite_15 , high energy physics @cite_11 , and others. In particular, @cite_10 adopted an end-to-end image-to-image regression approach for surrogate modeling of systems governed by stochastic PDEs with high-dimensional stochastic input in random porous media. In addition, the deep neural networks are cast in a formal Bayesian framework, enabling the network to express uncertainty about its predictions when trained on limited data. @cite_10 and our project both study two-dimensional, single-phase, steady-state flow through a random permeability field. However, the emphasis of @cite_10 lies in Bayesian deep learning for high-dimensional random inputs under a single fixed well control. In this work, we focus on uncertainty propagation with various well controls, which would allow existing well information to be used to infer incompletely known geological properties and to evaluate potential well locations during simulation. In large-scale reservoir projects such as CCS, such simulation-based quantification will be of significant importance and economic value to industry. (A minimal surrogate sketch follows below.)
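As a concrete illustration of the image-to-image surrogate idea, the following PyTorch sketch maps a permeability field to a pressure field with a small encoder-decoder; the layer sizes, channel counts, and random training pair are illustrative assumptions, not the architecture of @cite_10 :

```python
import torch
import torch.nn as nn

class TinySurrogate(nn.Module):
    """Minimal encoder-decoder mapping a 1-channel permeability field
    to a 1-channel pressure field (illustrative sizes only)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinySurrogate()
perm = torch.randn(8, 1, 64, 64)      # batch of permeability fields
pred = model(perm)                    # predicted pressure fields, same size
loss = nn.functional.mse_loss(pred, torch.randn_like(pred))
loss.backward()                       # one (dummy) training step's gradients
```

Once trained on solver-generated pairs, such a surrogate replaces the PDE solve inside the MC loop, which is where the speed-up over conventional MC comes from.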
{ "cite_N": [ "@cite_3", "@cite_24", "@cite_15", "@cite_10", "@cite_25", "@cite_11" ], "mid": [ "2767537294", "", "2311607323", "2784733489", "2585298970", "2125621954" ], "abstract": [ "Several multiscale methods account for sub-grid scale features using coarse scale basis functions. For example, in the Multiscale Finite Volume method the coarse scale basis functions are obtained by solving a set of local problems over dual-grid cells. We introduce a data-driven approach for the estimation of these coarse scale basis functions. Specifically, we employ a neural network predictor fitted using a set of solution samples from which it learns to generate subsequent basis functions at a lower computational cost than solving the local problems. The computational advantage of this approach is realized for uncertainty quantification tasks where a large number of realizations has to be evaluated. We attribute the ability to learn these basis functions to the modularity of the local problems and the redundancy of the permeability patches between samples. The proposed method is evaluated on elliptic problems yielding very promising results.", "", "In the era of big data, transformation of biomedical big data into valuable knowledge has been one of the most important challenges in bioinformatics. Deep learning has advanced rapidly since the early 2000s and now demonstrates state-of-the-art performance in various fields. Accordingly, application of deep learning in bioinformatics to gain insight from data has been emphasized in both academia and industry. Here, we review deep learning in bioinformatics, presenting examples of current research. To provide a useful and comprehensive perspective, we categorize research both by the bioinformatics domain (i.e. omics, biomedical imaging, biomedical signal processing) and deep learning architecture (i.e. deep neural networks, convolutional neural networks, recurrent neural networks, emergent architectures) and present brief descriptions of each study. Additionally, we discuss theoretical and practical issues of deep learning in bioinformatics and suggest future research directions. We believe that this review will provide valuable insights and serve as a starting point for researchers to apply deep learning approaches in their bioinformatics studies.", "Abstract We are interested in the development of surrogate models for uncertainty quantification and propagation in problems governed by stochastic PDEs using a deep convolutional encoder–decoder network in a similar fashion to approaches considered in deep learning for image-to-image regression tasks. Since normal neural networks are data-intensive and cannot provide predictive uncertainty, we propose a Bayesian approach to convolutional neural nets. A recently introduced variational gradient descent algorithm based on Stein's method is scaled to deep convolutional networks to perform approximate Bayesian inference on millions of uncertain network parameters. This approach achieves state of the art performance in terms of predictive accuracy and uncertainty quantification in comparison to other approaches in Bayesian neural networks as well as techniques that include Gaussian processes and ensemble methods even when the training data size is relatively small. To evaluate the performance of this approach, we consider standard uncertainty quantification tasks for flow in heterogeneous media using limited training data consisting of permeability realizations and the corresponding velocity and pressure fields. 
The performance of the surrogate model developed is very good even though there is no underlying structure shared between the input (permeability) and output (flow pressure) fields as is often the case in the image-to-image regression models used in computer vision problems. Studies are performed with an underlying stochastic input dimensionality up to 4225 where most other uncertainty quantification methods fail. Uncertainty propagation tasks are considered and the predictive output Bayesian statistics are compared to those obtained with Monte Carlo estimates.", "It was only a matter of time before deep neural networks (DNNs) – deep learning – made their mark in turbulence modelling, or more broadly, in the general area of high-dimensional, complex dynamical systems. In the last decade, DNNs have become a dominant data mining tool for big data applications. Although neural networks have been applied previously to complex fluid flows, the article featured here ( , J. Fluid Mech. , vol. 807, 2016, pp. 155–166) is the first to apply a true DNN architecture, specifically to Reynolds averaged Navier Stokes turbulence models. As one often expects with modern DNNs, performance gains are achieved over competing state-of-the-art methods, suggesting that DNNs may play a critically enabling role in the future of modelling complex flows.", "High-energy particle colliders are important for finding new particles, but huge volumes of data must be searched through to locate them. Here, the authors show the use of deep-learning methods on benchmark data sets as an approach to improving such new particle searches." ] }
1907.00718
2954653812
Quantification of uncertainty in production and injection forecasting is an important aspect of reservoir simulation studies. Conventional approaches include intrusive Galerkin-based methods (e.g., generalized polynomial chaos (gPC) and stochastic collocation (SC) methods) and non-intrusive Monte Carlo (MC) based methods. In these approaches, however, the quantification is conducted on reformulations of the underlying stochastic PDEs with fixed well controls. If one wants to take various well control plans into account, expensive computations need to be repeated for each well design independently. In this project, we take advantage of the equation-free spirit of convolutional neural networks (CNNs) to overcome this challenge and thus achieve the flexibility of efficient uncertainty quantification under various well controls. We are interested in the development of surrogate models for uncertainty quantification and propagation in reservoir simulations, using a deep convolutional encoder-decoder network as an analogue of image-to-image regression tasks in computer vision. First, a U-Net architecture is applied to replace the conventional, expensive deterministic PDE solver. Then we adopt the idea of shape-guided image generation with a variational U-Net and design a new variational U-Net architecture for "control-guided" reservoir simulation. A backward mapping is learned in the network to extract the hidden physical quantities, and future production is then predicted by the learned forward mapping using these hidden variables under various well controls. Comparisons in computational efficiency are made between our proposed CNN approach and the conventional MC approach. Significant improvements in computational speed, with reasonable accuracy loss, are observed in the numerical tests.
In the CS231N class, we learned two different approaches to image generation in the context of deep learning: the Variational Auto-Encoder (VAE) @cite_16 and Generative Adversarial Networks (GANs) @cite_8 . In @cite_0 , a conditional U-Net for shape-guided image generation is presented, conditioned on the output of a VAE that models appearance. The separation between shape and appearance is carefully modelled, and thus an explicit representation of the appearance, which can be combined with new poses, is obtained. Motivated by this work, we establish an analogy between the shape-guided image and the control-guided saturation map. Similarly, the hidden appearance in an image is analogous to the underlying permeability map. Under this framework, the variational U-Net is trained to learn the underlying Darcy's law. We take advantage of the nature of CNNs, which allows saturation maps to be generated freely under various well controls. Efficient uncertainty quantification can then be conducted to provide valuable evaluations of potential injection plans. (A minimal conditional-VAE sketch follows below.)
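A minimal sketch of the VAE ingredient, assuming toy vector-valued inputs rather than the full variational U-Net of @cite_0 : appearance (here, the permeability analogue) is encoded into a stochastic latent, and the decoder is conditioned on a control signal; all dimensions and the loss weighting are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CondVAE(nn.Module):
    """Toy 'control-guided' VAE: the appearance input x is encoded
    into a stochastic latent z, and the decoder is conditioned on the
    control signal c (illustrative dimensions only)."""
    def __init__(self, x_dim=256, c_dim=8, z_dim=16):
        super().__init__()
        self.enc = nn.Linear(x_dim, 2 * z_dim)          # -> (mu, log_var)
        self.dec = nn.Sequential(
            nn.Linear(z_dim + c_dim, 128), nn.ReLU(),
            nn.Linear(128, x_dim),
        )

    def forward(self, x, c):
        mu, log_var = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()  # reparameterize
        return self.dec(torch.cat([z, c], dim=-1)), mu, log_var

model = CondVAE()
x, c = torch.randn(4, 256), torch.randn(4, 8)
recon, mu, log_var = model(x, c)
kl = -0.5 * (1 + log_var - mu.pow(2) - log_var.exp()).sum(-1).mean()
loss = F.mse_loss(recon, x) + 1e-3 * kl   # ELBO-style objective
loss.backward()
```

At test time one can hold z fixed (the inferred "appearance"/permeability code) and vary c to generate outputs under different well controls, which is the analogy drawn above.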
{ "cite_N": [ "@cite_0", "@cite_16", "@cite_8" ], "mid": [ "2962963674", "", "2099471712" ], "abstract": [ "Deep generative models have demonstrated great performance in image synthesis. However, results deteriorate in case of spatial deformations, since they generate images of objects directly, rather than modeling the intricate interplay of their inherent shape and appearance. We present a conditional U-Net [30] for shape-guided image generation, conditioned on the output of a variational autoencoder for appearance. The approach is trained end-to-end on images, without requiring samples of the same object with varying pose or appearance. Experiments show that the model enables conditional image generation and transfer. Therefore, either shape or appearance can be retained from a query image, while freely altering the other. Moreover, appearance can be sampled due to its stochastic latent representation, while preserving shape. In quantitative and qualitative experiments on COCO [20], DeepFashion [21, 23], shoes [43], Market-1501 [47] and handbags [49] the approach demonstrates significant improvements over the state-of-the-art.", "", "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples." ] }
1907.00484
2953598839
We study network coordination problems, as captured by the setting of generalized network design (, STOC 2018), in the face of uncertainty resulting from partial information that the network users hold regarding the actions of their peers. This uncertainty is formalized using 's Bayesian ignorance framework (TCS 2012). While the approach of is purely combinatorial, the current paper takes into account computational considerations: Our main technical contribution is the development of (strongly) polynomial time algorithms for local decision making in the face of Bayesian uncertainty.
The technical framework that we use is inspired by @cite_24 . gives a detailed technical overview including a full comparison.
{ "cite_N": [ "@cite_24" ], "mid": [ "2791073292" ], "abstract": [ "In a generalized network design (GND) problem, a set of resources are assigned (non-exclusively) to multiple requests. Each request contributes its weight to the resources it uses and the total load on a resource is then translated to the cost it incurs via a resource specific cost function. Motivated by energy efficiency applications, recently, there is a growing interest in GND using cost functions that exhibit (dis)economies of scale ((D)oS), namely, cost functions that appear subadditive for small loads and superadditive for larger loads. The current paper advances the existing literature on approximation algorithms for GND problems with (D)oS cost functions in various aspects: (1) while the existing results are restricted to routing requests in undirected graphs, identifying the resources with the graph's edges, the current paper presents a generic approximation framework that yields approximation results for a much wider family of requests (including various types of Steiner tree and Steiner forest requests) in both directed and undirected graphs, where the resources can be identified with either the edges or the vertices; (2) while the existing results assume that a request contributes the same weight to each resource it uses, our approximation framework allows for unrelated weights, thus providing the first non-trivial approximation for the problem of scheduling unrelated parallel machines with (D)oS cost functions; (3) while most of the existing approximation algorithms are based on convex programming, our approximation framework is fully combinatorial and runs in strongly polynomial time; (4) the family of (D)oS cost functions considered in the current paper is more general than the one considered in the existing literature, providing a more accurate abstraction for practical energy conservation scenarios; and (5) we obtain the first approximation ratio for GND with (D)oS cost functions that depends only on the parameters of the resources' technology and does not grow with the number of resources, the number of requests, or their weights. The design of our approximation framework relies heavily on Roughgarden's smoothness toolbox (JACM 2015), thus demonstrating the possible usefulness of this toolbox in the area of approximation algorithms." ] }
1907.00484
2953598839
We study network coordination problems, as captured by the setting of generalized network design (, STOC 2018), in the face of uncertainty resulting from partial information that the network users hold regarding the actions of their peers. This uncertainty is formalized using 's Bayesian ignorance framework (TCS 2012). While the approach of is purely combinatorial, the current paper takes into account computational considerations: Our main technical contribution is the development of (strongly) polynomial time algorithms for local decision making in the face of Bayesian uncertainty.
The Bayesian approach is often used in the game-theoretic literature to model the uncertainty a player experiences regarding the actions taken by the other players. Roughgarden @cite_2 studies a (among other things) in which the players share (equally) the cost of the edges they use and proposes a theoretical tool called to analyze the of this game in a Bayesian setting, defined as @math , where @math denotes the set of . In particular, he proves that with the cost function @math , the PoA is bounded by @math . We employ the smoothness toolbox in our algorithmic construction, as further described in (see also the overview in ).
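For reference, the elided price-of-anarchy expression above has the standard Bayes-Nash form shown below; the notation is assumed for exposition and is not copied from @cite_2 :

```latex
\mathrm{PoA} \;=\; \sup_{\sigma \in \mathcal{E}}
  \frac{\mathbb{E}_{\theta \sim \mu}\!\left[\mathrm{cost}\big(\sigma(\theta)\big)\right]}
       {\mathbb{E}_{\theta \sim \mu}\!\left[\mathrm{cost}\big(\mathrm{OPT}(\theta)\big)\right]}
```

Here @math \mu@math is the prior over type profiles @math \theta@math and @math \mathcal{E}@math stands for the relevant set of equilibrium strategy profiles.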
{ "cite_N": [ "@cite_2" ], "mid": [ "2136124238" ], "abstract": [ "We define smooth games of incomplete information. We prove an \"extension theorem\" for such games: price of anarchy bounds for pure Nash equilibria for all induced full-information games extend automatically, without quantitative degradation, to all mixed-strategy Bayes-Nash equilibria with respect to a product prior distribution over players' preferences. We also note that, for Bayes-Nash equilibria in games with correlated player preferences, there is no general extension theorem for smooth games. We give several applications of our definition and extension theorem. First, we show that many games of incomplete information for which the price of anarchy has been studied are smooth in our sense. Thus our extension theorem unifies much of the known work on the price of anarchy in games of incomplete information. Second, we use our extension theorem to prove new bounds on the price of anarchy of Bayes-Nash equilibria in congestion games with incomplete information." ] }
1907.00484
2953598839
We study network coordination problems, as captured by the setting of generalized network design (, STOC 2018), in the face of uncertainty resulting from partial information that the network users hold regarding the actions of their peers. This uncertainty is formalized using 's Bayesian ignorance framework (TCS 2012). While the approach of is purely combinatorial, the current paper takes into account computational considerations: Our main technical contribution is the development of (strongly) polynomial time algorithms for local decision making in the face of Bayesian uncertainty.
In @cite_14 , online combinatorial optimization problems are investigated in which the requests arriving online are drawn independently and identically from a known distribution. As an example, @cite_14 study the online Steiner tree problem on an undirected graph @math . In this problem, at each step the algorithm receives a terminal drawn independently from a distribution over @math , and must maintain a subset of edges connecting all the terminals received so far (a greedy sketch follows below).
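A sketch of the natural greedy algorithm for this setting, in which each arriving terminal is attached to the current tree via a shortest path to its nearest tree vertex; the graph, weights, and terminal sequence below are toy assumptions:

```python
import random
import networkx as nx

def online_steiner_greedy(G, terminals, weight="weight"):
    """Greedy online Steiner tree: join each arriving terminal to the
    nearest vertex already in the tree via a shortest path."""
    terminals = list(terminals)
    tree_nodes = {terminals[0]}
    total_cost = 0.0
    for t in terminals[1:]:
        if t in tree_nodes:
            continue
        # Shortest-path distances and paths from the new terminal.
        dist, paths = nx.single_source_dijkstra(G, t, weight=weight)
        anchor = min(tree_nodes, key=lambda v: dist.get(v, float("inf")))
        total_cost += dist[anchor]
        tree_nodes.update(paths[anchor])   # absorb the connecting path
    return tree_nodes, total_cost

# Toy usage on a complete graph with random edge weights.
rng = random.Random(0)
G = nx.complete_graph(20)
for u, v in G.edges:
    G[u][v]["weight"] = rng.uniform(1.0, 10.0)
print(online_steiner_greedy(G, [0, 5, 11, 3, 7])[1])
```

The cited result is that, under i.i.d. draws from a known distribution, a variant of exactly this greedy rule achieves an O(1) ratio in expectation, in contrast to the logarithmic lower bounds for adversarial arrivals.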
{ "cite_N": [ "@cite_14" ], "mid": [ "2064147138" ], "abstract": [ "In this paper, we study online algorithms when the input is not chosen adversarially, but consists of draws from some given probability distribution. While this model has been studied for online problems like paging and k-server, it is not known how to beat the Φ(log n) bound for online Steiner tree if at each time instant, the demand vertex is a uniformly random vertex from the graph. For the online Steiner tree problem, we show that if each demand vertex is an independent draw from some probability distribution π: V → [0, 1], a variant of the natural greedy algorithm achieves Eω[A(ω)] Eω[OPT (ω)] = O(1); moreover, this result can be extended to some other subadditive problems. Both assumptions that the input sequence consists of independent draws from π, and that π is known to the algorithm are both essential; we show (almost) logarithmic lower bounds if either assumption is violated. Moreover, we give preliminary results on extending the Steiner tree results above to the related \"expected ratio\" measure Eω[ω(ω) OPT (ω)]. Finally, we use these ideas to give an average-case analysis of the Universal TSP problem." ] }
1907.00484
2953598839
We study network coordination problems, as captured by the setting of generalized network design (, STOC 2018), in the face of uncertainty resulting from partial information that the network users hold regarding the actions of their peers. This uncertainty is formalized using 's Bayesian ignorance framework (TCS 2012). While the approach of is purely combinatorial, the current paper takes into account computational considerations: Our main technical contribution is the development of (strongly) polynomial time algorithms for local decision making in the face of Bayesian uncertainty.
Our work differs from @cite_14 in the following four aspects. First, in the stochastic online optimization problem studied in @cite_14 , when each request @math arrives, the previous requests @math have been realized, and the realization is known. By contrast, in the BGND problem, every agent @math needs to be served without knowing the actual realization of the other agents. Second, the cost function studied in @cite_14 maps each resource @math to a fixed toll, which is subadditive in the number of requests using @math , while our cost function is superadditive. Third, in the BGND problem with set connectivity requests, for each agent @math , each type @math is a set of terminals rather than a single terminal, and each action in @math is a Steiner tree spanning the set of terminals corresponding to @math . Fourth, in the BGND problem, each prior distribution @math is over the types of agent @math , while there is no distribution over the agents.
{ "cite_N": [ "@cite_14" ], "mid": [ "2064147138" ], "abstract": [ "In this paper, we study online algorithms when the input is not chosen adversarially, but consists of draws from some given probability distribution. While this model has been studied for online problems like paging and k-server, it is not known how to beat the Φ(log n) bound for online Steiner tree if at each time instant, the demand vertex is a uniformly random vertex from the graph. For the online Steiner tree problem, we show that if each demand vertex is an independent draw from some probability distribution π: V → [0, 1], a variant of the natural greedy algorithm achieves Eω[A(ω)] Eω[OPT (ω)] = O(1); moreover, this result can be extended to some other subadditive problems. Both assumptions that the input sequence consists of independent draws from π, and that π is known to the algorithm are both essential; we show (almost) logarithmic lower bounds if either assumption is violated. Moreover, we give preliminary results on extending the Steiner tree results above to the related \"expected ratio\" measure Eω[ω(ω) OPT (ω)]. Finally, we use these ideas to give an average-case analysis of the Universal TSP problem." ] }
1907.00677
2954196635
In the software development industry, technical debt is regarded as a critical issue because of negative consequences, such as increased software development cost, low product quality, decreased maintainability, and slowed progress, for the long-term success of developing software. However, despite the vast research contributions in technical debt management for software engineering, the idea of technical debt fails to provide a holistic view that includes both IT and business aspects. Further, implementing an enterprise architecture (EA) project might not always succeed, due to uncertainty and the unavailability of resources. Therefore, we relate the consequences of EA implementation failure to a new metaphor -- Enterprise Architecture Debt (EA Debt). We anticipate that the accumulation of EA Debt will negatively influence EA quality and expose the business to risk.
According to @cite_28 , technical debt refers to invisible elements, because visible improvement work, such as new features for evolution or repairing defects for maintainability, should not be considered debt. Technical debt should rather serve as a retrospective view, reflecting changes in the environment, rapid success, or technological advancements as possible causes of debt. However, ``the debt might actually be a good investment, but it's imperative to remain aware of this debt and the increased friction it will impose on the development team'' @cite_28 . Hence, tools are required to increase awareness, to identify debt and its causes, and to manage debt-related tasks. Finally, the debt should not be treated in isolation from the visible elements of evolution and maintenance. Consequently, ``debt is the invisible result of past decisions about software that negatively affect its future'' @cite_28 .
{ "cite_N": [ "@cite_28" ], "mid": [ "1965658570" ], "abstract": [ "The metaphor of technical debt in software development was introduced two decades ago to explain to nontechnical stakeholders the need for what we call now \"refactoring.\" As the term is being used to describe a wide range of phenomena, this paper proposes an organization of the technical debt landscape, and introduces the papers on technical debt contained in the issue." ] }
1907.00677
2954196635
In the software development industry, technical debt is regarded as a critical issue because of negative consequences, such as increased software development cost, low product quality, decreased maintainability, and slowed progress, for the long-term success of developing software. However, despite the vast research contributions in technical debt management for software engineering, the idea of technical debt fails to provide a holistic view that includes both IT and business aspects. Further, implementing an enterprise architecture (EA) project might not always succeed, due to uncertainty and the unavailability of resources. Therefore, we relate the consequences of EA implementation failure to a new metaphor -- Enterprise Architecture Debt (EA Debt). We anticipate that the accumulation of EA Debt will negatively influence EA quality and expose the business to risk.
@cite_29 encountered the same phenomenon and point out that most code smells are introduced when the corresponding artifacts are first created. Furthermore, the code often gets smellier because new artifacts are built on top of suboptimal implementations. Even refactoring is often done incorrectly, as it introduces further bad smells, which highlights the need for techniques and tools to support such processes @cite_29 .
{ "cite_N": [ "@cite_29" ], "mid": [ "2014216297" ], "abstract": [ "In past and recent years, the issues related to managing technical debt received significant attention by researchers from both industry and academia. There are several factors that contribute to technical debt. One of these is represented by code bad smells, i.e., symptoms of poor design and implementation choices. While the repercussions of smells on code quality have been empirically assessed, there is still only anecdotal evidence on when and why bad smells are introduced. To fill this gap, we conducted a large empirical study over the change history of 200 open source projects from different software ecosystems and investigated when bad smells are introduced by developers, and the circumstances and reasons behind their introduction. Our study required the development of a strategy to identify smell-introducing commits, the mining of over 0.5M commits, and the manual analysis of 9,164 of them (i.e., those identified as smell-introducing). Our findings mostly contradict common wisdom stating that smells are being introduced during evolutionary tasks. In the light of our results, we also call for the need to develop a new generation of recommendation systems aimed at properly planning smell refactoring activities." ] }
1907.00677
2954196635
In the software development industry, technical debt is regarded as a critical issue because of negative consequences, such as increased software development cost, low product quality, decreased maintainability, and slowed progress, for the long-term success of developing software. However, despite the vast research contributions in technical debt management for software engineering, the idea of technical debt fails to provide a holistic view that includes both IT and business aspects. Further, implementing an enterprise architecture (EA) project might not always succeed, due to uncertainty and the unavailability of resources. Therefore, we relate the consequences of EA implementation failure to a new metaphor -- Enterprise Architecture Debt (EA Debt). We anticipate that the accumulation of EA Debt will negatively influence EA quality and expose the business to risk.
Further, the technical debt metaphor tries to help decide how to invest scarce resources: ``Like financial debt, sometimes technical debt can be necessary'' @cite_36 . Most of the time this debt is not visible, as @cite_28 also pointed out, so making debt visible is one purpose. Additionally, the value and present value play a role, including the difference between the actual state and a supposed ideal state, as well as the time-to-impact. This involves ``a differentiation between structural issues (the potential technical debt) and the effect it has on actual development (the effective technical debt)'' @cite_23 , which could also be called problems and risks.
{ "cite_N": [ "@cite_36", "@cite_28", "@cite_23" ], "mid": [ "1614607847", "1965658570", "2082157362" ], "abstract": [ "Ward Cunningham in his experience report presented at the OOPSLA'92 conference introduced the metaphor of technical debt. This metaphor is related to immature, incomplete or inadequate artifacts in the software development cycle that cause higher costs and lower quality. A strategy for the technical debt management is still a challenge because its definition is not yet part of the software development process. Carolyn Seaman and Yuepu Guo proposed a technical debt management framework based on three stages. First, debts are identified and listed. After that, debts are measured by their payment efforts and then debts are selected to be considered in the software development cycle. This study evaluates the application of this framework in the real context of software projects adopting Scrum. Action research is conducted in two companies where their projects have significant technical debt. We performed three action research cycles based on the three stages of the framework for both companies. The main contribution of this paper is to provide real experiences and improvements for projects using Scrum and that may adopt the technical debt management framework proposed by Seaman and Guo. Both teams recognized that the proposed approach is feasible for being considered in the software development process after some modifications. Because of projects time constraints and ease of use, we reduced the use of the proposed metrics to two: Principal and the Current Amount of Interest. In consequence, decision-making was benefitted by the early consideration of the debts that really need to be paid. Instead of using probabilities to find the interest, these are registered every time the technical debt occurs. During the first phase, the debts identification was improved when all Scrum roles participated, while measurement and decision-making were improved when the team was responsible for these phases. The Product Owner role in both companies understood the importance of Technical Debt monitoring and prioritization during a development cycle. With these changes, the two teams mentioned they would remain using the resulting approach.", "The metaphor of technical debt in software development was introduced two decades ago to explain to nontechnical stakeholders the need for what we call now \"refactoring.\" As the term is being used to describe a wide range of phenomena, this paper proposes an organization of the technical debt landscape, and introduces the papers on technical debt contained in the issue.", "Over recent years the topic of technical debt has gained significant attention in the software engineering community. The area of technical debt research is somewhat peculiar within software engineering as it is built on a metaphor. This has certainly benefited the field as it helps to achieve a lot of attention and eases communication about the topic, however, it seems it is to some extent also sidetracking research work, if the metaphor is used beyond its range of applicability. In this paper, we focus on the limits of the metaphor and the problems that arise when over-extending its applicability. We do also aim at providing some additional insights by proposing certain ways of handling these restrictions." ] }
1907.00677
2954196635
In the software development industry, technical debt is regarded as a critical issue because of negative consequences, such as increased software development cost, low product quality, decreased maintainability, and slowed progress, for the long-term success of developing software. However, despite the vast research contributions in technical debt management for software engineering, the idea of technical debt fails to provide a holistic view that includes both IT and business aspects. Further, implementing an enterprise architecture (EA) project might not always succeed, due to uncertainty and the unavailability of resources. Therefore, we relate the consequences of EA implementation failure to a new metaphor -- Enterprise Architecture Debt (EA Debt). We anticipate that the accumulation of EA Debt will negatively influence EA quality and expose the business to risk.
Addicks and Appelrath @cite_5 searched for key figures and their metrics in order to unify the quality assessment of an application landscape. This approach can be applied to the EA as a whole, because business processes, for example, influence an application's quality. They stated that ``all key figures must at least fit one of the following three conditions: (a) it must be used for indications of applications and be based on the application’s attributes, (b) it must be an indicator of an application and its value is determined by attributes and relations from other EA artifacts (the applications’ enterprise context), and (c) it must indicate a landscape’s quality and therefore use all attributes of applications and their enterprise context'' @cite_5 .
{ "cite_N": [ "@cite_5" ], "mid": [ "1969539959" ], "abstract": [ "This contribution presents a method to evaluate business applications. The method allows for using artifacts of enterprise architectures. Artifacts like business processes or hardware can exert influence on the application's quality and thus have to be regarded. A central aspect is to modularize the method's basic components, which are key figures and their metrics. The modularization allows for flexible usage and customization to fit the heterogeneity of different organizations. To make the method more intuitive, a key figure is encapsulated by a fuzzy logic based component denoted as criterion. A criterion addresses a certain aspect to evaluate an application and allows for using linguistic terms to represent a key figure." ] }
1907.00677
2954196635
In the software development industry, technical debt is regarded as a critical issue because of negative consequences, such as increased software development cost, low product quality, decreased maintainability, and slowed progress, for the long-term success of developing software. However, despite the vast research contributions in technical debt management for software engineering, the idea of technical debt fails to provide a holistic view that includes both IT and business aspects. Further, implementing an enterprise architecture (EA) project might not always succeed, due to uncertainty and the unavailability of resources. Therefore, we relate the consequences of EA implementation failure to a new metaphor -- Enterprise Architecture Debt (EA Debt). We anticipate that the accumulation of EA Debt will negatively influence EA quality and expose the business to risk.
Since many aspects influence the system's properties, a unified meta model is difficult to create. However, @cite_15 show that even different enterprise orientations with divergent focuses prefer certain qualities over others. In general, they show that the qualities striven for differ across enterprises; nevertheless, there are some general qualities that are desired in most situations. Thus, a specialized prioritization and adaptation is needed for the EA and its IT business alignment.
{ "cite_N": [ "@cite_15" ], "mid": [ "2151597242" ], "abstract": [ "Enterprise Architecture models can be used to support IT business alignment. However, existing approaches do not distinguish between different IT business alignment situations. Since companies face diverse challenges in achieving a high degree of IT business alignment, a universal ‘one size fits all’ approach does not seem appropriate. This paper proposes to decompose the IT business alignment problem into tangible qualities for business, IT systems, and IT governance. An explorative study among 162 professionals is used to distinguish four IT business alignment situations, i.e. four clusters of IT business alignment problems. These situations each represent the current state according to certain qualities and also the priorities for future development. In order to increase IT business alignment, enterprise architecture meta models are proposed for each identified situation. One core meta model (to reflect common priorities) as well as situation specific extensions are presented." ] }
1907.00677
2954196635
In the software development industry, technical debt is regarded as a critical issue because of negative consequences, such as increased software development cost, low product quality, decreased maintainability, and slowed progress, for the long-term success of developing software. However, despite the vast research contributions in technical debt management for software engineering, the idea of technical debt fails to provide a holistic view that includes both IT and business aspects. Further, implementing an enterprise architecture (EA) project might not always succeed, due to uncertainty and the unavailability of resources. Therefore, we relate the consequences of EA implementation failure to a new metaphor -- Enterprise Architecture Debt (EA Debt). We anticipate that the accumulation of EA Debt will negatively influence EA quality and expose the business to risk.
Ylimäki goes even further and defines twelve critical success factors for Enterprise Architecture. These factors obviously influence EA and its quality, although they differ from previously known aspects. Here, high-quality EA is described as follows: ``high-quality EA conforms to the agreed and fully understood business requirements, fits for its purpose (e.g. a more efficient IT decision making), and satisfies the key stakeholder groups’ (the top management, IT management, architects, IT developers, and so forth) expectations in a cost-effective way understanding both their current needs and future requirements'' @cite_22 .
{ "cite_N": [ "@cite_22" ], "mid": [ "2275714656" ], "abstract": [ "First published in the Journal of Enterprise Architecture, Vol. 2, No. 4, 2006 pp. 29-40. Republished with the kind permission of the Journal of Enterprise Architecture" ] }
1907.00456
2953981431
Most deep reinforcement learning (RL) systems are not able to learn effectively from off-policy data, especially if they cannot explore online in the environment. These are critical shortcomings for applying RL to real-world problems where collecting data is expensive, and models must be tested offline before being deployed to interact with the environment -- e.g. systems that learn from human interaction. Thus, we develop a novel class of off-policy batch RL algorithms, which are able to effectively learn offline, without exploring, from a fixed batch of human interaction data. We leverage models pre-trained on data as a strong prior, and use KL-control to penalize divergence from this prior during RL training. We also use dropout-based uncertainty estimates to lower bound the target Q-values as a more efficient alternative to Double Q-Learning. The algorithms are tested on the problem of open-domain dialog generation -- a challenging reinforcement learning problem with a 20,000-dimensional action space. Using our Way Off-Policy algorithm, we can extract multiple different reward functions post-hoc from collected human interaction data, and learn effectively from all of these. We test the real-world generalization of these systems by deploying them live to converse with humans in an open-domain setting, and demonstrate that our algorithm achieves significant improvements over prior methods in off-policy batch RL.
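A rough sketch of the dropout-based lower bound described above; the small feed-forward Q-network and all hyperparameters are illustrative assumptions, not the paper's exact implementation. Dropout is kept active at evaluation time, several stochastic passes of the target network are sampled, and mean minus a multiple of the standard deviation serves as a pessimistic target:

```python
import torch
import torch.nn as nn

class QNet(nn.Module):
    def __init__(self, s_dim=32, n_actions=4, p_drop=0.1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(s_dim, 64), nn.ReLU(),
            nn.Dropout(p_drop),
            nn.Linear(64, n_actions),
        )

    def forward(self, s):
        return self.body(s)

def lower_bound_target_q(target_net, next_s, n_samples=16, kappa=1.0):
    """Monte Carlo dropout lower bound on target Q-values: mean minus
    kappa standard deviations across stochastic forward passes."""
    target_net.train()                  # keep dropout active at inference
    with torch.no_grad():
        qs = torch.stack([target_net(next_s) for _ in range(n_samples)])
    mean, std = qs.mean(0), qs.std(0)
    return (mean - kappa * std).max(dim=-1).values  # pessimistic max_a Q

target = QNet()
print(lower_bound_target_q(target, torch.randn(8, 32)).shape)  # -> (8,)
```

The pessimism here plays the role that the second network plays in Double Q-Learning: it damps overestimated target values for actions the batch data covers poorly.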
The approach we propose is based on KL-control, a branch of stochastic optimal control (SOC) in which the Kullback-Leibler (KL) divergence from some distribution is used to regularize an RL policy (e.g. ). Well-known examples, including Trust Region Policy Optimization (TRPO) @cite_11 , use conservative, KL-regularized policy updates to restrict the RL algorithm to stay close to its own prior policy (e.g. ). KL-control can also be applied to entropy maximization (e.g. @cite_5 ); for example, @math -learning penalizes the KL-divergence from a simple uniform distribution in order to cope with overestimation of @math -values @cite_38 . Soft @math -learning motivates using a Boltzmann distribution in the value function as a way of performing maximum-entropy RL @cite_50 . KL-control has also been used to improve transfer between maximum likelihood estimation (MLE) training on data and training with RL @cite_15 . To the best of our knowledge, our work is the first to propose KL-control as a way of improving off-policy learning without exploration in a BRL setting (a minimal sketch of the penalty follows below).
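To make the KL-control idea concrete, here is a hedged sketch; the discrete action space, REINFORCE-style estimator, and penalty weight c are illustrative choices, not the paper's exact algorithm. The environment reward is shifted by the log-ratio between the policy and a fixed prior, which penalizes divergence from the prior during RL training:

```python
import torch
import torch.nn.functional as F

def kl_control_loss(logits_pi, logits_prior, actions, rewards, c=0.1):
    """Policy-gradient loss with a KL penalty toward a fixed prior:
    the per-step reward is shifted by log pi(a|s) - log p(a|s), whose
    expectation under pi is exactly KL(pi || p)."""
    logp_pi = F.log_softmax(logits_pi, dim=-1)
    logp_prior = F.log_softmax(logits_prior, dim=-1)
    lp_a = logp_pi.gather(-1, actions.unsqueeze(-1)).squeeze(-1)
    lq_a = logp_prior.gather(-1, actions.unsqueeze(-1)).squeeze(-1)
    shaped_r = rewards - c * (lp_a - lq_a).detach()   # penalized reward
    return -(shaped_r * lp_a).mean()                  # REINFORCE-style loss

logits_pi = torch.randn(8, 20000, requires_grad=True)  # 20k-dim action space
logits_prior = torch.randn(8, 20000)                   # frozen pretrained prior
actions = torch.randint(0, 20000, (8,))
loss = kl_control_loss(logits_pi, logits_prior, actions, torch.randn(8))
loss.backward()
```

With the prior taken to be a model pretrained on data, this penalty keeps the RL policy from drifting into regions the batch never covered, which is the failure mode of naive off-policy training.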
{ "cite_N": [ "@cite_38", "@cite_50", "@cite_5", "@cite_15", "@cite_11" ], "mid": [ "2963267001", "1757796397", "2098774185", "2591984255", "1771410628" ], "abstract": [ "Model-free reinforcement learning algorithms, such as Q-learning, perform poorly in the early stages of learning in noisy environments, because much effort is spent unlearning biased estimates of the state-action value function. The bias results from selecting, among several noisy estimates, the apparent optimum, which may actually be suboptimal. We propose G-learning, a new off-policy learning algorithm that regularizes the value estimates by penalizing deterministic policies in the beginning of the learning process. We show that this method reduces the bias of the value-function estimation, leading to faster convergence to the optimal value and the optimal policy. Moreover, G-learning enables the natural incorporation of prior domain knowledge, when available. The stochastic nature of G-learning also makes it avoid some exploration costs, a property usually attributed only to on-policy algorithms. We illustrate these ideas in several examples, where G-learning results in significant improvements of the convergence rate and the cost of the learning process.", "We present the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning. The model is a convolutional neural network, trained with a variant of Q-learning, whose input is raw pixels and whose output is a value function estimating future rewards. We apply our method to seven Atari 2600 games from the Arcade Learning Environment, with no adjustment of the architecture or learning algorithm. We find that it outperforms all previous approaches on six of the games and surpasses a human expert on three of them.", "Recent research has shown the benefit of framing problems of imitation learning as solutions to Markov Decision Problems. This approach reduces learning to the problem of recovering a utility function that makes the behavior induced by a near-optimal policy closely mimic demonstrated behavior. In this work, we develop a probabilistic approach based on the principle of maximum entropy. Our approach provides a well-defined, globally normalized distribution over decision sequences, while providing the same performance guarantees as existing methods. We develop our technique in the context of modeling real-world navigation and driving behaviors where collected data is inherently noisy and imperfect. Our probabilistic approach enables modeling of route preferences as well as a powerful new approach to inferring destinations and routes based on partial trajectories.", "This paper proposes a general method for improving the structure and quality of sequences generated by a recurrent neural network (RNN), while maintaining information originally learned from data, as well as sample diversity. An RNN is first pre-trained on data using maximum likelihood estimation (MLE), and the probability distribution over the next token in the sequence learned by this model is treated as a prior policy. Another RNN is then trained using reinforcement learning (RL) to generate higher-quality outputs that account for domain-specific incentives while retaining proximity to the prior policy of the MLE RNN. To formalize this objective, we derive novel off-policy RL methods for RNNs from KL-control. 
The effectiveness of the approach is demonstrated on two applications; 1) generating novel musical melodies, and 2) computational molecular generation. For both problems, we show that the proposed method improves the desired properties and structure of the generated sequences, while maintaining information learned from data.", "In this article, we describe a method for optimizing control policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified scheme, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters." ] }
1907.00504
2955570464
In TCEs (Temporary Crowded Events), such as music festivals, users face problems accessing the Internet. TCEs are limited-time events with a high concentration of people moving within the event enclosure while accessing the Internet. Unlike other events where the user locations are constant and known from the start (e.g. stadiums), traffic generation and user movement in TCEs are variable and influenced by the dynamics of the event. The movement of users can lead to overloads of APs (Access Points) if the APs are fixed. In order to minimize this phenomenon, new techniques have been explored that resort to the adjustable positioning of APs integrated into UAVs (Unmanned Aerial Vehicles). In these scenarios, the dynamic placement of the APs means that tools for predicting user movements and, in turn, the sources of traffic become particularly important when coupled with the AP positioning algorithms. In order to allow the development and analysis of new network planning solutions for TCEs, it is necessary to recreate these scenarios in simulation, which, in turn, requires a detailed characterization of this kind of event. This article aims to characterize and model the mobility and traffic generated by users in TCEs. This characterization will enable the development of new statistical models of traffic generation and user mobility in TCEs.
In @cite_1 , data from various 3G and LTE (Long Term Evolution) cell towers in metropolitan areas are extracted and analyzed. In order to organize the data, a clustering technique, called , is used, which consists of grouping the data in pairs, grouping these pairs again, and so on, until the desired number of clusters is reached @cite_10 . This technique is used to identify or define areas where users exhibit characteristic traffic-generation behavior. The main goal of that work was to create a model that combines time, location, and frequency information to analyze the traffic patterns of thousands of cell towers (a minimal clustering sketch follows below).
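A minimal sketch of this kind of agglomerative (hierarchical) clustering using SciPy; the 24-hour traffic profiles below are hypothetical stand-ins for the measured tower data:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Hypothetical data: one daily traffic profile (24 hourly loads)
# per cell tower; a real study would use measured volumes.
profiles = rng.random((500, 24))

# Agglomerative clustering: repeatedly merge the closest pair of
# clusters until the full merge tree (dendrogram) is built.
Z = linkage(profiles, method="ward")
labels = fcluster(Z, t=5, criterion="maxclust")  # cut into 5 clusters
print(np.bincount(labels)[1:])                   # towers per cluster
```

Cutting the dendrogram at the desired number of clusters yields the groups of towers with similar traffic behavior that the cited study maps to area types.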
{ "cite_N": [ "@cite_10", "@cite_1" ], "mid": [ "1971318281", "2045522464" ], "abstract": [ "Fast and high-quality document clustering algorithms play an important role in providing intuitive navigation and browsing mechanisms by organizing large amounts of information into a small number of meaningful clusters. In particular, hierarchical clustering solutions provide a view of the data at different levels of granularity, making them ideal for people to visualize and interactively explore large document collections.In this paper we evaluate different partitional and agglomerative approaches for hierarchical clustering. Our experimental evaluation showed that partitional algorithms always lead to better clustering solutions than agglomerative algorithms, which suggests that partitional clustering algorithms are well-suited for clustering large document datasets due to not only their relatively low computational requirements, but also comparable or even better clustering performance. We present a new class of clustering algorithms called constrained agglomerative algorithms that combine the features of both partitional and agglomerative algorithms. Our experimental results showed that they consistently lead to better hierarchical solutions than agglomerative or partitional algorithms alone.", "Understanding mobile traffic patterns of large scale cellular towers in urban environment is extremely valuable for Internet service providers, mobile users, and government managers of modern metropolis. This paper aims at extracting and modeling the traffic patterns of large scale towers deployed in a metropolitan city. To achieve this goal, we need to address several challenges, including lack of appropriate tools for processing large scale traffic measurement data, unknown traffic patterns, as well as handling complicated factors of urban ecology and human behaviors that affect traffic patterns. Our core contribution is a powerful model which combines three dimensional information (time, locations of towers, and traffic frequency spectrum) to extract and model the traffic patterns of thousands of cellular towers. Our empirical analysis reveals the following important observations. First, only five basic time-domain traffic patterns exist among the 9,600 cellular towers. Second, each of the extracted traffic pattern maps to one type of geographical locations related to urban ecology, including residential area, business district, transport, entertainment, and comprehensive area. Third, our frequency domain traffic spectrum analysis suggests that the traffic of any tower among the 9,600 can be constructed using a linear combination of four primary components corresponding to human activity behaviors. We believe that the proposed traffic patterns extraction and modeling methodology, combined with the empirical analysis on the mobile traffic, pave the way toward a deep understanding of the traffic patterns of large scale cellular towers in modern metropolis." ] }
1907.00504
2955570464
In TCEs (Temporary Crowded Events), such as music festivals, users face problems accessing the Internet. TCEs are limited-time events with a high concentration of people moving within the event enclosure while accessing the Internet. Unlike other events where the user locations are constant and known from the start (e.g. stadiums), traffic generation and user movement in TCEs are variable and influenced by the dynamics of the event. The movement of users can lead to overloads of APs (Access Points) if the APs are fixed. In order to minimize this phenomenon, new techniques have been explored that resort to the adjustable positioning of APs integrated into UAVs (Unmanned Aerial Vehicles). In these scenarios, the dynamic placement of the APs means that tools for predicting user movements and, in turn, the sources of traffic become particularly important when coupled with the AP positioning algorithms. In order to allow the development and analysis of new network planning solutions for TCEs, it is necessary to recreate these scenarios in simulation, which, in turn, requires a detailed characterization of this kind of event. This article aims to characterize and model the mobility and traffic generated by users in TCEs. This characterization will enable the development of new statistical models of traffic generation and user mobility in TCEs.
In article @cite_8 , the authors aim to understand the dynamics of Internet traffic in large cellular networks, which, according to them, is useful for network design, troubleshooting, performance evaluation, and network optimization. The data used in this study were collected from a telecommunications operator and correspond to one week of mobile traffic. The method is used to categorize the various types of devices observed. A traffic prediction model based on Markov chains is also created. This model, called the Markov Model, has a state transition matrix defined by a set of properties: the number of rows equals the number of columns, and each row always sums to 1. The model also has the property that, from any state, it is only possible to remain in the same state or to transition to a contiguous state (a construction sketch follows below).
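The transition matrix described above can be written down directly; in the following sketch the state count and probabilities are illustrative, not taken from @cite_8 . It builds a square, row-stochastic matrix whose only nonzero entries are the diagonal and its immediate neighbours, then simulates a short traffic-level trajectory:

```python
import numpy as np

def contiguous_markov_matrix(n_states, p_stay=0.6):
    """Transition matrix of the kind described above: square, rows
    summing to 1, and nonzero only on the diagonal and its immediate
    neighbours (stay, or move to a contiguous traffic state)."""
    P = np.zeros((n_states, n_states))
    for i in range(n_states):
        nbrs = [j for j in (i - 1, i + 1) if 0 <= j < n_states]
        P[i, i] = p_stay
        for j in nbrs:
            P[i, j] = (1.0 - p_stay) / len(nbrs)
    return P

P = contiguous_markov_matrix(5)
assert np.allclose(P.sum(axis=1), 1.0)   # row-stochastic by construction

# Simulate a traffic-level trajectory starting from state 2.
rng = np.random.default_rng(1)
state, path = 2, [2]
for _ in range(10):
    state = int(rng.choice(len(P), p=P[state]))
    path.append(state)
print(path)
```

In a fitted model the stay/move probabilities would be estimated per state from the measured traffic transitions rather than fixed a priori.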
{ "cite_N": [ "@cite_8" ], "mid": [ "1986085123" ], "abstract": [ "Understanding Internet traffic dynamics in large cellular networks is important for network design, troubleshooting, performance evaluation, and optimization. In this paper, we present the results from our study, which is based upon a week-long aggregated flow level mobile device traffic data collected from a major cellular operator's core network. In this study, we measure and characterize the spatial and temporal dynamics of mobile Internet traffic. We distinguish our study from other related work by conducting the measurement at a larger scale and exploring mobile data traffic patterns along two new dimensions -- device types and applications that generate such traffic patterns. Based on the findings of our measurement analysis, we propose a Zipf-like model to capture the volume distribution of application traffic and a Markov model to capture the volume dynamics of aggregate Internet traffic. We further customize our models for different device types using an unsupervised clustering algorithm to improve prediction accuracy." ] }
1907.00504
2955570464
In TCEs (Temporary Crowded Events), such as music festivals, users face problems accessing the Internet. TCEs are limited-time events with a high concentration of people moving within the event enclosure while accessing the Internet. Unlike other events where the user locations are constant and known from the start (e.g. stadiums), traffic generation and user movement in TCEs are variable and influenced by the dynamics of the event. The movement of users can lead to overloads of APs (Access Points) if the APs are fixed. In order to minimize this phenomenon, new techniques have been explored that resort to the adjustable positioning of APs integrated into UAVs (Unmanned Aerial Vehicles). In these scenarios, the dynamic placement of the APs means that tools for predicting user movements and, in turn, the sources of traffic become particularly important when coupled with the AP positioning algorithms. In order to allow the development and analysis of new network planning solutions for TCEs, it is necessary to recreate these scenarios in simulation, which, in turn, requires a detailed characterization of this kind of event. This article aims to characterize and model the mobility and traffic generated by users in TCEs. This characterization will enable the development of new statistical models of traffic generation and user mobility in TCEs.
The articles @cite_1 and @cite_8 come close to the desired solution, since both provide a traffic characterization and the second additionally provides a traffic prediction model. However, in both cases the APs are fixed, which, in the case of TCEs, is disadvantageous compared to the use of UAVs, since there may be obstacles in the event venue and the crowd may obstruct the line of sight. These studies are also distant from our goal because they deal with cellular networks, whereas in our case a Wi-Fi based network solution is desired.
{ "cite_N": [ "@cite_1", "@cite_8" ], "mid": [ "2045522464", "1986085123" ], "abstract": [ "Understanding mobile traffic patterns of large scale cellular towers in urban environment is extremely valuable for Internet service providers, mobile users, and government managers of modern metropolis. This paper aims at extracting and modeling the traffic patterns of large scale towers deployed in a metropolitan city. To achieve this goal, we need to address several challenges, including lack of appropriate tools for processing large scale traffic measurement data, unknown traffic patterns, as well as handling complicated factors of urban ecology and human behaviors that affect traffic patterns. Our core contribution is a powerful model which combines three dimensional information (time, locations of towers, and traffic frequency spectrum) to extract and model the traffic patterns of thousands of cellular towers. Our empirical analysis reveals the following important observations. First, only five basic time-domain traffic patterns exist among the 9,600 cellular towers. Second, each of the extracted traffic pattern maps to one type of geographical locations related to urban ecology, including residential area, business district, transport, entertainment, and comprehensive area. Third, our frequency domain traffic spectrum analysis suggests that the traffic of any tower among the 9,600 can be constructed using a linear combination of four primary components corresponding to human activity behaviors. We believe that the proposed traffic patterns extraction and modeling methodology, combined with the empirical analysis on the mobile traffic, pave the way toward a deep understanding of the traffic patterns of large scale cellular towers in modern metropolis.", "Understanding Internet traffic dynamics in large cellular networks is important for network design, troubleshooting, performance evaluation, and optimization. In this paper, we present the results from our study, which is based upon a week-long aggregated flow level mobile device traffic data collected from a major cellular operator's core network. In this study, we measure and characterize the spatial and temporal dynamics of mobile Internet traffic. We distinguish our study from other related work by conducting the measurement at a larger scale and exploring mobile data traffic patterns along two new dimensions -- device types and applications that generate such traffic patterns. Based on the findings of our measurement analysis, we propose a Zipf-like model to capture the volume distribution of application traffic and a Markov model to capture the volume dynamics of aggregate Internet traffic. We further customize our models for different device types using an unsupervised clustering algorithm to improve prediction accuracy." ] }
1907.00504
2955570464
In TCEs (Temporary Crowded Events), such as music festivals, users often face problems accessing the Internet. TCEs are limited-time events with a high concentration of people moving within the event enclosure while accessing the Internet. Unlike events where user locations are constant and known in advance (e.g. stadiums), traffic generation and user movement in TCEs are variable and influenced by the dynamics of the event. The movement of users can overload APs (Access Points) when these are fixed. To mitigate this phenomenon, new techniques have been explored that rely on the adjustable positioning of APs mounted on UAVs (Unmanned Aerial Vehicles). In these scenarios, the dynamic placement of the APs makes tools for predicting user movement, and in turn the traffic sources, particularly important when coupled with the AP positioning algorithms. To allow the development and analysis of new network planning solutions for TCEs, it is necessary to recreate these scenarios in simulation, which in turn requires a detailed characterization of this kind of event. This article aims to characterize and model the mobility and traffic generated by users in TCEs. This characterization will enable the development of new statistical models of traffic generation and user mobility in TCEs.
There are also several other studies on this topic, in which clustering is a frequently used technique @cite_6 @cite_4 @cite_0 . The K-Means technique, in particular, is used in @cite_0 .
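As an illustration of how such clustering could serve TCE planning, the sketch below groups hypothetical user positions with K-Means and treats the centroids as candidate AP locations. All data and parameters are invented for the example and do not come from the cited studies.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical 2-D user positions (metres) inside an event enclosure.
rng = np.random.default_rng(42)
users = np.vstack([
    rng.normal(loc=(20.0, 30.0), scale=5.0, size=(100, 2)),  # crowd near a stage
    rng.normal(loc=(80.0, 60.0), scale=8.0, size=(60, 2)),   # crowd near food stalls
])

# Cluster the users; the centroids become candidate AP positions.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(users)
print("candidate AP positions:\n", kmeans.cluster_centers_)
print("users per cluster:", np.bincount(kmeans.labels_))
```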
{ "cite_N": [ "@cite_0", "@cite_4", "@cite_6" ], "mid": [ "2189625117", "2143557214", "2113213522" ], "abstract": [ "Clustering is widely used in different field such as biology, psychology, and economics. The result of clustering varies as number of cluster parameter changes hence main challenge of cluster analysis is that the number of clusters or the number of model parameters is seldom known, and it must be determined before clustering. The several clustering algorithm has been proposed. Among them k-means method is a simple and fast clustering technique. We address the problem of cluster number selection by using a k-means approach We can ask end users to provide a number of clusters in advance, but it is not feasible end user requires domain knowledge of each data set. There are many methods available to estimate the number of clusters such as statistical indices, variance based method, Information Theoretic, goodness of fit method etc...The paper explores six different approaches to determine the right number of clusters in a dataset", "Server side measurements from several Wi-Fi hot-spots deployed in a nationwide network over different types of venues from small coffee shops to large enterprises are used to highlight differences in traffic volumes and patterns. We develop a common modeling framework for the number of simultaneously present customers. Our approach has many novel elements: (a) We combine statistical clustering with Poisson regression from Generalized Linear Models to fit a non-stationary Poisson process to the arrival counts and demonstrate its remarkable accuracy; (b) We model the heavy tailed distribution of connection durations through fitting a Phase Type distribution to its logarithm so that not only the tail but also the overall distribution is well matched; (c) We obtain the distribution of the number of simultaneously present customers from an M t G ∞ queuing model using a novel regenerative argument that is transparent and avoids the customarily made assumption of the queue starting empty at an infinite past; (d) Most importantly, we validate our models by comparison of their predictions and confidence intervals against test data that is not used in fitting the models.", "This paper presents a new method for the estimation and characterization of the expected teletraffic in mobile communication networks. The method considers the teletraffic from the network viewpoint. The traffic estimation is based on the geographic traffic model, which obeys the geographical and demographical factors for the demand for mobile communication services. For the spatial teletraffic characterization, a novel representation technique is introduced which uses the notion of discrete demand nodes. We show how the information in geographical information systems can be used to estimate the teletraffic demand in an early phase of the network design process. Additionally, we outline how the discrete demand node representation facilitates the application of demand-based automatic mobile network design algorithms." ] }
1907.00710
2955416746
When traveling to a foreign country, we are often in dire need of an intelligent conversational agent to provide instant and informative responses to our various queries. However, building such a travel agent is non-trivial. First of all, travel naturally involves several sub-tasks, such as hotel reservation, restaurant recommendation and taxi booking, which calls for global topic control. Secondly, the agent should consider the various constraints, like price or distance, given by the user in order to recommend an appropriate venue. In this paper, we present a Deep Conversational Recommender (DCR) and apply it to travel. It augments sequence-to-sequence (seq2seq) models with a neural latent topic component to better guide response generation and make the training easier. To consider the various constraints for venue recommendation, we leverage a graph convolutional network (GCN) based approach to capture the relationships between different venues and the match between venue and dialog context. For response generation, we combine the topic-based component with the idea of pointer networks, which allows us to effectively incorporate recommendation results. We perform extensive evaluation on a multi-turn task-oriented dialog dataset in the travel domain and the results show that our method achieves superior performance as compared to a wide range of baselines.
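To make the pointer-network idea in the abstract above concrete, here is a toy sketch of mixing a vocabulary distribution with a copy distribution over context tokens via a generation gate. This is a generic pointer-mixture computation, not the authors' exact DCR formulation; all scores and token names are illustrative.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

vocab = ["<eos>", "i", "recommend", "the", "<venue>"]
context_tokens = [4]                  # a recommended venue occupying vocab slot 4

decoder_logits = np.array([0.1, 0.5, 2.0, 1.0, 0.0])  # hypothetical decoder scores
copy_attention = softmax(np.array([3.0]))             # attention over context tokens
g = 0.7                                               # generation gate in [0, 1]

p_vocab = softmax(decoder_logits)                     # distribution over the vocab
p_copy = np.zeros(len(vocab))                         # copy mass scattered to slots
for tok, att in zip(context_tokens, copy_attention):
    p_copy[tok] += att

p_final = g * p_vocab + (1 - g) * p_copy              # pointer-mixture distribution
print(vocab[int(p_final.argmax())], p_final.round(3))
```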
Task-oriented systems aim to assist users in achieving specific goals with natural language, such as restaurant reservation and schedule arrangement. Traditionally, they have been built in a pipelined fashion: language understanding, dialog management, knowledge query and response generation @cite_26 @cite_13 @cite_8 . However, the human labor required to design the dialog ontology and the heavy reliance on slot-filling and dialog state tracking techniques limit their usage to relatively simple and specific tasks, such as flight reservation @cite_30 or querying bus information @cite_37 . For travel, which involves multiple sub-tasks and requires handling various constraints for venue recommendation, such pipelined methods might not be sufficient.
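For contrast with such pipelined designs, the following toy sketch wires up the four classical stages (understanding, state tracking, knowledge query, generation) with hand-written rules. Every slot, pattern and database entry is invented for illustration; a real system would learn or engineer each stage separately.

```python
import re

# Toy pipelined task-oriented system: understanding -> state tracking ->
# knowledge query -> generation. All slots, patterns and entries are invented.
DATABASE = [
    {"name": "Tandoori Palace", "food": "indian", "price": "cheap"},
    {"name": "Chez Nous", "food": "french", "price": "expensive"},
]

def understand(utterance, state):
    """Naive slot filling via keyword patterns (the NLU stage)."""
    for food in ("indian", "french"):
        if food in utterance:
            state["food"] = food
    if re.search(r"cheap|inexpensive", utterance):
        state["price"] = "cheap"
    return state

def query(state):
    """Knowledge query: filter the database by the tracked constraints."""
    return [r for r in DATABASE if all(r.get(k) == v for k, v in state.items())]

def respond(results, state):
    """Template-based response generation."""
    if not results:
        return "Sorry, I found nothing matching your constraints."
    return f"How about {results[0]['name']}? It serves {state['food']} food."

state = understand("i want a cheap indian restaurant", {})
print(respond(query(state), state))
```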
{ "cite_N": [ "@cite_30", "@cite_37", "@cite_26", "@cite_8", "@cite_13" ], "mid": [ "2675982228", "178897730", "2438667436", "1975244201", "2104544334" ], "abstract": [ "", "In this paper, we describe how a research spoken dialog system was made available to the general public. The Let’s Go Public spoken dialog system provides bus schedule information to the Pittsburgh population during off-peak times. This paper describes the changes necessary to make the system usable for the general public and presents analysis of the calls and strategies we have used to ensure high performance.", "In a spoken dialog system, determining which action a machine should take in a given situation is a difficult problem because automatic speech recognition is unreliable and hence the state of the conversation can never be known with certainty. Much of the research in spoken dialog systems centres on mitigating this uncertainty and recent work has focussed on three largely disparate techniques: parallel dialog state hypotheses, local use of confidence scores, and automated planning. While in isolation each of these approaches can improve action selection, taken together they currently lack a unified statistical framework that admits global optimization. In this paper we cast a spoken dialog system as a partially observable Markov decision process (POMDP). We show how this formulation unifies and extends existing techniques to form a single principled framework. A number of illustrations are used to show qualitatively the potential benefits of POMDPs compared to existing techniques, and empirical results from dialog simulations are presented which demonstrate significant quantitative gains. Finally, some of the key challenges to advancing this method - in particular scalability - are briefly outlined.", "Statistical dialog systems (SDSs) are motivated by the need for a data-driven framework that reduces the cost of laboriously handcrafting complex dialog managers and that provides robustness against the errors created by speech recognizers operating in noisy environments. By including an explicit Bayesian model of uncertainty and by optimizing the policy via a reward-driven process, partially observable Markov decision processes (POMDPs) provide such a framework. However, exact model representation and optimization is computationally intractable. Hence, the practical application of POMDP-based systems requires efficient algorithms and carefully constructed approximations. This review article provides an overview of the current state of the art in the development of POMDP-based spoken dialog systems.", "We have proposed an expandable dialog scenario description and platform to manage dialog systems using a weighted finite-state transducer (WFST) in which user concept and system action tags are input and output of the transducer, respectively. In this paper, we apply this framework to statistical dialog management in which a dialog strategy is acquired from a corpus of human-to-human conversation for hotel reservation. A scenario WFST for dialog management was automatically created from an N-gram model of a tag sequence that was annotated in the corpus with Interchange Format (IF). Additionally, a word-to-concept WFST for spoken language understanding (SLU) was obtained from the same corpus. The acquired scenario WFST and SLU WFST were composed together and then optimized. 
We evaluated the proposed WFST-based statistic dialog management in terms of correctness to detect the next system actions and have confirmed the automatically acquired dialog scenario from a corpus can manage dialog reasonably on the WFST-based dialog management platform." ] }
1907.00529
2953580776
The fastest known classical algorithm deciding the @math -colorability of an @math -vertex graph requires running time @math for @math . In this work, we present an exponential-space quantum algorithm computing the chromatic number with running time @math using quantum random access memory (QRAM). Our approach is based on 's quantum dynamic programming with applications of Grover's search to branching algorithms. We also present a polynomial-space quantum algorithm not using QRAM for the graph @math -coloring problem with running time @math . In the polynomial-space quantum algorithm, we essentially show @math -time classical algorithms that can be improved quadratically by Grover's search.
Beigel and Eppstein showed an efficient algorithm for the graph 3-coloring problem with running time @math @cite_1 . Byskov showed reduction algorithms from the graph @math -coloring problem to the graph 3-coloring problem @cite_9 . Using Beigel and Eppstein's 3-coloring algorithm, Byskov obtained classical algorithms for the graph 4-, 5- and 6-coloring problems with running times @math , @math and @math , respectively. The authors of @cite_11 showed an algorithm for the graph 4-coloring problem with running time @math by using path decompositions.
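For reference, the decision problem these running times refer to can be solved by plain backtracking, as in the sketch below. This baseline is exponential in the naive sense and far slower than the specialised bounds of @cite_1 and @cite_9 ; it only fixes the problem being solved.

```python
def is_k_colorable(adj, k):
    """Decide k-colorability by backtracking over the vertices of a graph
    given as an adjacency list. A plain exponential-time baseline, not the
    specialised algorithms of @cite_1 or @cite_9."""
    n = len(adj)
    colors = [-1] * n

    def assign(v):
        if v == n:
            return True
        for c in range(k):
            if all(colors[u] != c for u in adj[v]):
                colors[v] = c
                if assign(v + 1):
                    return True
        colors[v] = -1
        return False

    return assign(0)

# The 5-cycle has chromatic number 3.
c5 = [[1, 4], [0, 2], [1, 3], [2, 4], [3, 0]]
print(is_k_colorable(c5, 2), is_k_colorable(c5, 3))  # False True
```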
{ "cite_N": [ "@cite_9", "@cite_1", "@cite_11" ], "mid": [ "2122639972", "1533811829", "1598159863" ], "abstract": [ "We give tight upper bounds on the number of maximal independent sets of size k (and at least k and at most k) in graphs with n vertices. As an application of the proof, we construct improved algorithms for graph colouring and computing the chromatic number of a graph.", "We consider worst case time bounds for several NP-complete problems, based on a constraint satisfaction (CSP) formulation of these problems: (a, b)-CSP instances consist of a set of variables, each with up to a possible values, and constraints disallowing certain b-tuples of variable values; a problem is solved by assigning values to all variables satisfying all constraints, or by showing that no such assignment exist. 3-SAT is equivalent to (2, 3)-CSP while 3-coloring and various related problems are special cases of (3, 2)-CSP; there is also a natural duality transformation from (a, b)-CSP to (b, a)-CSP. We show that n-variable (3, 2)-CSP instances can be solved in time O(1.3645n), that satisfying assignments to (d, 2)-CSP instances can be found in randomized expected time O((0.4518d)n); that 3-coloring of n-vertex graphs can be solved in time O(1.3289n); that 3-list-coloring of n-vertex graphs can be solved in time O(1.3645n); that 3-edge-coloring of n-vertex graphs can be solved in time O(2n 2), and that 3-satisfiability of a formula with t 3-clauses can be solved in time O(nO(1) + 1.3645t).", "We introduce a generic algorithmic technique and apply it on decision and counting versions of graph coloring. Our approach is based on the following idea: either a graph has nice (from the algorithmic point of view) properties which allow a simple recursive procedure to find the solution fast, or the pathwidth of the graph is small, which in turn can be used to find the solution by dynamic programming. By making use of this technique we obtain the fastest known exact algorithms - running in time O(1.7272n) for deciding if a graph is 4-colorable and - running in time O(1.6262n) and O(1.9464n) for counting the number of k-colorings for k = 3 and 4 respectively." ] }
1907.00529
2953580776
The fastest known classical algorithm deciding the @math -colorability of an @math -vertex graph requires running time @math for @math . In this work, we present an exponential-space quantum algorithm computing the chromatic number with running time @math using quantum random access memory (QRAM). Our approach is based on 's quantum dynamic programming with applications of Grover's search to branching algorithms. We also present a polynomial-space quantum algorithm not using QRAM for the graph @math -coloring problem with running time @math . In the polynomial-space quantum algorithm, we essentially show @math -time classical algorithms that can be improved quadratically by Grover's search.
In 2006, Bj "orklund and Husfeldt, and Koivisto showed an exponential-space @math -time algorithm for the chromatic number problem on the RAM model @cite_5 , @cite_3 . These algorithms are based on the inclusion--exclusion principle. They also showed that if there is a polynomial-space @math -time algorithm counting the number of independent sets, then there is a polynomial-space @math -time algorithm computing the chromatic number @cite_5 , @cite_7 . Since the fastest known polynomial-space algorithm computes the number of independent sets with running time @math @cite_15 , there is a polynomial-space @math -time algorithm computing the chromatic number.
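The inclusion-exclusion idea can be stated in words: the graph is k-colorable iff the alternating sum, over all vertex subsets S, of the k-th powers of the independent-set counts inside S is positive. A direct exponential-space implementation of this principle from @cite_5 and @cite_3 , written for tiny graphs (variable names are ours), might look as follows:

```python
def chromatic_number(adj):
    """Chromatic number via the inclusion-exclusion principle of
    Bjorklund-Husfeldt and Koivisto (@cite_5, @cite_3): the graph is
    k-colorable iff the alternating sum over all vertex subsets S of
    i(S)^k is positive, where i(S) counts independent sets inside S.
    Exponential space; only meant for tiny graphs."""
    n = len(adj)
    nbr = [0] * n
    for v, neighbours in enumerate(adj):
        for u in neighbours:
            nbr[v] |= 1 << u

    # i[S] = number of independent sets contained in subset S (empty set included).
    i = [0] * (1 << n)
    i[0] = 1
    for S in range(1, 1 << n):
        v = (S & -S).bit_length() - 1                    # some vertex of S
        # sets avoiding v, plus sets containing v (which exclude N[v])
        i[S] = i[S & ~(1 << v)] + i[S & ~(nbr[v] | (1 << v))]

    def k_colorable(k):
        total = 0
        for S in range(1 << n):
            sign = -1 if (n - bin(S).count("1")) % 2 else 1
            total += sign * i[S] ** k
        return total > 0

    k = 1
    while not k_colorable(k):
        k += 1
    return k

c5 = [[1, 4], [0, 2], [1, 3], [2, 4], [3, 0]]
print(chromatic_number(c5))  # 3
```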
{ "cite_N": [ "@cite_5", "@cite_15", "@cite_7", "@cite_3" ], "mid": [ "2163938320", "2962886649", "2074359677", "2002425091" ], "abstract": [ "Given a set U with n elements and a family of subsets S 2^U we show how to count the number of k-partitions S_1 ? ? ? S_k = U into subsets S_i S in time 2^ n n^ O(1) . The only assumption on S is that it can be enumerated in time 2^ n n^ O(1) . In effect we get exact algorithms in time 2^ n n^ O(1) for several well-studied partition problems including Domatic Number, Chromatic Number, Bounded Component Spanning Forest, Partition into Hamiltonian Subgraphs, and Bin Packing. If only polynomial space is available, our algorithms run in time 3^ n n^ O(1) if membership in S can be decided in polynomial time. For Chromatic Number, we present a version that runs in time O(2.2461^n ) and polynomial space. For Domatic Number, we present a version that runs in time O(2.8718^n ). Finally, we present a family of polynomial space approximation algorithms that find a number between ( G ) and ( 1 + ) ( G ) in time O(1.2209^n + 2.2461^ e^ - n ).", "We present a polynomial-space algorithm that computes the number of independent sets of any input graph in time (O(1.1389^n) ) for graphs with maximum degree 3 and in time (O(1.2356^n) ) for general graphs, where n is the number of vertices. Together with the inclusion-exclusion approach of Bjorklund, Husfeldt, and Koivisto [SIAM J. Comput. 2009], this leads to a faster polynomial-space algorithm for the graph coloring problem with running time (O(2.2356^n) ). As a byproduct, we also obtain an exponential-space (O(1.2330^n) ) time algorithm for counting independent sets.", "Given a set @math with @math elements and a family @math of subsets, we show how to partition @math into @math such subsets in @math time. We also consider variations of this problem where the subsets may overlap or are weighted, and we solve the decision, counting, summation, and optimization versions of these problems. Our algorithms are based on the principle of inclusion-exclusion and the zeta transform. In effect we get exact algorithms in @math time for several well-studied partition problems including domatic number, chromatic number, maximum @math -cut, bin packing, list coloring, and the chromatic polynomial. We also have applications to Bayesian learning with decision graphs and to model-based data clustering. If only polynomial space is available, our algorithms run in time @math if membership in @math can be decided in polynomial time. We solve chromatic number in @math time and domatic number in @math time. Finally, we present a family of polynomial space approximation algorithms that find a number between @math and @math in time @math .", "We use the principle of inclusion and exclusion, combined with polynomial time segmentation and fast Mobius transform, to solve the generic problem of summing or optimizing over the partitions of n elements into a given number of weighted subsets. This problem subsumes various classical graph partitioning problems, such as graph coloring, domatic partitioning, and MAX k-CUT, as well as machine learning problems like decision graph learning and model-based data clustering. Our algorithm runs in O*(2^n ) time, thus substantially improving on the usual O*(3^n )-time dynamic programming algorithm; the notation O* suppresses factors polynomial in n. This result improves, e.g., Byskov?s recent record for graph coloring from O*(2.4023^n ) to O*(2^n ). We note that twenty five years ago, R. M. 
Karp used inclusion--exclusion in a similar fashion to reduce the space requirement of the usual dynamic programming algorithms from exponential to polynomial." ] }
1907.00529
2953580776
The fastest known classical algorithm deciding the @math -colorability of an @math -vertex graph requires running time @math for @math . In this work, we present an exponential-space quantum algorithm computing the chromatic number with running time @math using quantum random access memory (QRAM). Our approach is based on 's quantum dynamic programming with applications of Grover's search to branching algorithms. We also present a polynomial-space quantum algorithm not using QRAM for the graph @math -coloring problem with running time @math . In the polynomial-space quantum algorithm, we essentially show @math -time classical algorithms that can be improved quadratically by Grover's search.
There is almost no previous theoretical work on quantum algorithms for graph coloring problems. Fürer mentioned that Grover's algorithm can be applied to branching algorithms, so that Beigel and Eppstein's algorithm for the graph 3-coloring problem can be improved to running time @math @cite_2 . The quantum algorithms for Theorem are basically obtained by applying Grover's search to generalizations of Byskov's reduction algorithms, on the basis of Fürer's observation.
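Fürer's observation amounts to a quadratic speedup of the search over the branching tree. Stated as a back-of-the-envelope calculation (ours, not the paper's derivation), using the 3-coloring bound of Beigel and Eppstein as the example:

```latex
% Grover's search over the T(n) leaves of a branching tree finds an accepting
% leaf, if one exists, in O(sqrt(T(n))) steps, so the 3-coloring bound of
% Beigel and Eppstein improves quadratically:
\[
  O\!\left(1.3289^{n}\right)
  \;\longrightarrow\;
  O\!\left(\sqrt{1.3289^{n}}\right)
  = O\!\left(1.3289^{n/2}\right)
  \approx O\!\left(1.1528^{n}\right).
\]
```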
{ "cite_N": [ "@cite_2" ], "mid": [ "1727475799" ], "abstract": [ "In his seminal paper, Grover points out the prospect of faster solutions for an NP-complete problem like SAT. If there are n variables, then an obvious classical deterministic algorithm checks out all 2n truth assignments in about 2n steps, while his quantum search algorithm can find a satisfying truth assignment in about 2n 2 steps. For several NP-complete problems, many sophisticated classical algorithms have been designed. They are still exponential, but much faster than the brute force algorithms. The question arises whether their running time can still be decreased from T(n) to O (√T(n)) by using a quantum computer. Isolated positive examples are known, and some speed-up has been obtained for wider classes. Here, we present a simple method to obtain the full T(n) to O(√T(n)) speed-up for most of the many nontrivial exponential time algorithms for NP-hard problems. The method works whenever the widely used technique of recursive decomposition is employed. This included all currently known algorithms for which such a speedup has not yet been known." ] }
1907.00549
2800365265
Commodity RGB-D sensors capture color images along with dense pixel-wise depth information in real-time. Typical RGB-D sensors are provided with a factory calibration and exhibit erratic depth readings due to coarse calibration values, ageing and thermal influence effects. This limits their applicability in computer vision and robotics. We propose a novel method to accurately calibrate depth considering spatial and thermal influences jointly. Our work is based on Gaussian Process Regression in a four dimensional Cartesian and thermal domain. We propose to leverage modern GPUs for dense depth map correction in real-time. For reproducibility we make our dataset and source code publicly available.
The availability of modern commodity RGB-D sensors has raised interest in methods for their accurate calibration. Typical approaches @cite_8 @cite_10 use a calibration target together with intrinsic and extrinsic parameter estimation methods, such as the one proposed by Zhang @cite_5 . Since one usually has no access to the internals of the depth generation process, these attempts are insufficient for accurate depth correction, because they merely correct for lens effects. In contrast, the work of @cite_16 assumes planarity in the observed data; they propose to correct depth by computing a pixel-wise mean residual depth image over all calibration poses. @cite_17 exploit co-planarity in the structure of the calibration target and propose to model the observed depth as a linear function of the true depth. In the work of @cite_14 , a second-order polynomial per pixel is proposed to compensate for depth artefacts. The closest work to ours, in terms of applied methods, is by Amamra et al. @cite_6 , who use a Gaussian Process to predict absolute depth from spatial locations. More recently, online depth calibration methods have tried to compensate depth errors using a visual SLAM system @cite_7 .
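As a sketch of the regression flavor involved, the snippet below fits a Gaussian Process to synthetic depth residuals over a 4-D spatial-thermal input and subtracts the predicted residual from a new reading. The data, kernel choice and length scales are all illustrative assumptions, not the configuration of any of the cited works.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Synthetic training set: 4-D inputs (pixel x, pixel y, measured depth,
# device temperature) and depth residuals to be regressed away. A real
# setup would compute residuals against a calibration target of known geometry.
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.uniform(0, 640, n),     # pixel x
    rng.uniform(0, 480, n),     # pixel y
    rng.uniform(0.5, 4.0, n),   # measured depth (m)
    rng.uniform(25, 45, n),     # device temperature (deg C)
])
# Hypothetical residual: grows with depth and temperature, plus noise.
y = 0.01 * X[:, 2] + 0.002 * (X[:, 3] - 30.0) + rng.normal(0, 0.002, n)

kernel = RBF(length_scale=[200.0, 200.0, 1.0, 5.0]) + WhiteKernel(1e-5)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# Correcting a new reading: subtract the predicted residual.
query = np.array([[320.0, 240.0, 2.0, 40.0]])
corrected = 2.0 - gp.predict(query)[0]
print(f"corrected depth: {corrected:.4f} m")
```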
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_8", "@cite_6", "@cite_5", "@cite_16", "@cite_10", "@cite_17" ], "mid": [ "1976617541", "2775132987", "", "2120484897", "2167667767", "1965386961", "", "2097107088" ], "abstract": [ "Highlights? A calibration procedure for depth and color cameras. ? RGB, IR and D cameras joint calibration. ? Spatially variant depth correction model for calibrated RGB-D devices. ? A novel stereoscopic rendering technique by taking into account eyes' position. ? A stereoscopic Augmented Reality system for natural Human-machine interaction. A Human-machine interaction system requires precise information about the user's body position, in order to allow a natural 3D interaction in stereoscopic augmented reality environments, where real and virtual objects should coherently coexist. The diffusion of RGB-D sensors seems to provide an effective solution to such a problem. Nevertheless, the interaction with stereoscopic 3D environments, in particular in peripersonal space, requires a higher degree of precision. To this end, a reliable calibration of such sensors and an accurate estimation of the relative pose of different RGB-D and visualization devices are crucial. Here, robust and straightforward procedures to calibrate a RGB-D camera, to improve the accuracy of its 3D measurements, and to co-register different calibrated devices are proposed. Quantitative measures validate the proposed approach. Moreover, calibrated devices have been used in an augmented reality system, based on a dynamic stereoscopic rendering technique that needs accurate information about the observer's eyes position.", "Modern consumer RGB-D cameras are affordable and provide dense depth estimates at high frame rates. Hence, they are popular for building dense environment representations. Yet, the sensors often do not provide accurate depth estimates since the factory calibration exhibits a static deformation. We present a novel approach to online depth calibration that uses a visual SLAM system as reference for the measured depth. A sparse map is generated and the visual information is used to correct the static deformation of the measured depth while missing data is extrapolated using a small number of thin plate splines (TPS). The corrected depth can then be used to improve the accuracy of the sparse RGB-D map and the 3D environment reconstruction. As more data becomes available, the depth calibration is updated on the fly. Our method does not rely on a planar geometry like walls or a one-to-one-pixel correspondence between color and depth camera. Our approach is evaluated in real-world scenarios and against ground truth data. Comparison against two popular self-calibration methods is performed. Furthermore, we show clear visual improvement on aggregated point clouds with our method.", "", "In this work, we present a novel method to accurately calibrate active depth sensors such as the Microsoft Kinect. Our approach is based on the Gaussian Process Regression (GPR). It is applied after the standard calibration and it is particularly useful for the aging depth cameras. The latter, were proven to suffer from inaccuracies that cannot be fixed with the standard pinhole camera calibration procedure. Experimental results show the weaknesses of this naive calibration and the corrective of effect our algorithm. We further justify the possibility to extend the same approach to any other type of cameras with similar characteristics.", "We propose a flexible technique to easily calibrate a camera. 
It only requires the camera to observe a planar pattern shown at a few (at least two) different orientations. Either the camera or the planar pattern can be freely moved. The motion need not be known. Radial lens distortion is modeled. The proposed procedure consists of a closed-form solution, followed by a nonlinear refinement based on the maximum likelihood criterion. Both computer simulation and real data have been used to test the proposed technique and very good results have been obtained. Compared with classical techniques which use expensive equipment such as two or three orthogonal planes, the proposed technique is easy to use and flexible. It advances 3D computer vision one more step from laboratory environments to real world use.", "We analyze Kinect as a 3D measuring device, experimentally investigate depth measurement resolution and error properties and make a quantitative comparison of Kinect accuracy with stereo reconstruction from SLR cameras and a 3D-TOF camera. We propose Kinect geometrical model and its calibration procedure providing an accurate calibration of Kinect 3D measurement and Kinect cameras. We demonstrate the functionality of Kinect calibration by integrating it into an SfM pipeline where 3D measurements from a moving Kinect are transformed into a common coordinate system by computing relative poses from matches in color camera.", "", "Commodity depth cameras have created many interesting new applications in the research community recently. These applications often require the calibration information between the color and the depth cameras. Traditional checkerboard based calibration schemes fail to work well for the depth camera, since its corner features cannot be reliably detected in the depth image. In this paper, we present a maximum likelihood solution for the joint depth and color calibration based on two principles. First, in the depth image, points on the checker-board shall be co-planar, and the plane is known from color camera calibration. Second, additional point correspondences between the depth and color images may be manually specified or automatically established to help improve calibration accuracy. Uncertainty in depth values has been taken into account systematically. The proposed algorithm is reliable and accurate, as demonstrated by extensive experimental results on simulated and real-world examples." ] }